Fusion provides the ability to run jobs against your data collections. To create or configure the jobs detailed in the reference topics in this section, sign in to Fusion and click Collections > Jobs. Then click Add+ to create a new job, or select an existing job that you want to configure. The jobs described in this section have a subtype property with one of these values:
- “task”. Jobs of this type include the following:
- REST HTTP calls that run REST/HTTP commands
- Log cleanup job that deletes log messages from the system logs collection
- “spark”. Jobs of this type process data and include the following:
- SQL Aggregation job that injects user-defined parameters into a SQL template
- Custom Python job that runs Python code using Fusion
- Custom Spark job that runs a custom JAR file
- Script jobs that run a custom Scala script using Fusion
- Supervised classification jobs such as Build Training Data and Classification
- Recommendation jobs such as BPR Recommender and Content-Based Recommender
- Cluster Labeling and Document Clustering jobs
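Because every job configuration carries a subtype value, a client that has already retrieved a list of job configurations can sort them into the categories above. The sketch below is a minimal illustration, not Fusion API code: it assumes jobs are available as a list of dictionaries with id and subtype keys, and the sample job names are hypothetical.

```python
from collections import defaultdict

def group_jobs_by_subtype(jobs):
    """Group job configuration dicts by their 'subtype' property."""
    groups = defaultdict(list)
    for job in jobs:
        # Jobs without a recognized subtype fall into an "unknown" bucket.
        groups[job.get("subtype", "unknown")].append(job["id"])
    return dict(groups)

# Hypothetical sample data mirroring the subtype values in this section.
jobs = [
    {"id": "delete-old-system-logs", "subtype": "task"},
    {"id": "daily-signals-aggregation", "subtype": "spark"},
    {"id": "web-crawl", "subtype": "datasource"},
]
print(group_jobs_by_subtype(jobs))
```

Grouping by subtype this way mirrors the distinction the UI makes: task and spark jobs are managed from Collections > Jobs, while datasource jobs are handled through their connector configurations.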
Jobs with a subtype of datasource have configuration schemas that depend on the connector type. For more information, see Connectors Configuration Reference. You cannot create, run, or schedule datasource subtype jobs in the Collections > Jobs screen.

Managing and Scheduling Jobs
The Managing and Scheduling Jobs quick learning focuses on how to create, configure, and schedule jobs using the Fusion UI.