Ground Truth
Ground truth or gold standard datasets are used in the ground truth jobs and query relevance metrics to define a specific set of documents.
These jobs produce data that can be used for query rewriting or to inform updates to the synonyms.txt file.
Head/Tail Analysis
Perform head/tail analysis of queries from collections of raw or aggregated signals to identify underperforming queries and the reasons they underperform. This information is valuable for improving overall conversion rates, Solr configurations, auto-suggest, product catalogs, and SEO/SEM strategies.
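The core of a head/tail split can be illustrated with a small sketch. This is not the job's actual implementation; it assumes raw signals reduced to a flat list of query strings, and it defines the "head" as the most frequent queries that together cover a configurable share of total traffic:

```python
from collections import Counter

def head_tail_split(queries, head_share=0.5):
    """Split queries into a 'head' (the most frequent queries covering
    head_share of total traffic) and the long 'tail' (everything else)."""
    counts = Counter(queries)
    total = sum(counts.values())
    head, covered = [], 0
    for query, count in counts.most_common():
        if covered / total >= head_share:
            break
        head.append(query)
        covered += count
    tail = [q for q in counts if q not in head]
    return head, tail

# hypothetical signal data
signals = ["tv", "tv", "tv", "laptop", "laptop", "usb c hub", "4k tv stand"]
head, tail = head_tail_split(signals, head_share=0.5)
# head covers at least half of all traffic; the tail queries are the
# low-frequency ones worth inspecting for misspellings or zero results
```

Tail queries surfaced this way are typical inputs to spell correction, synonym expansion, and catalog-gap analysis.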
Synonym Detection
Use this job to generate pairs of synonyms and pairs of similar queries. Two words are considered potential synonyms when they are used in a similar context in similar queries.
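The idea of "similar context" can be sketched distributionally: represent each word by the set of other words it appears alongside in queries, then score pairs by context overlap. This is an illustrative toy, not the job's algorithm; the Jaccard threshold and whitespace tokenization are assumptions:

```python
from collections import defaultdict
from itertools import combinations

def synonym_candidates(queries, min_overlap=0.5):
    """Score word pairs by Jaccard overlap of their query contexts.
    A word's context is the set of other words it co-occurs with."""
    contexts = defaultdict(set)
    for q in queries:
        words = q.lower().split()
        for w in words:
            contexts[w].update(x for x in words if x != w)
    pairs = []
    for a, b in combinations(sorted(contexts), 2):
        union = contexts[a] | contexts[b]
        if not union:
            continue
        jaccard = len(contexts[a] & contexts[b]) / len(union)
        if jaccard >= min_overlap:
            pairs.append((a, b, jaccard))
    return pairs

# hypothetical query log
queries = ["cheap couch", "cheap sofa", "leather couch", "leather sofa"]
candidates = synonym_candidates(queries)
```

Note that distributional similarity also surfaces non-synonym pairs that share a role (here "cheap"/"leather", both modifiers), which is why candidate pairs are typically reviewed before being added to synonyms.txt.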
Token and Phrase Spell Correction
Detect misspellings in queries or documents using the numbers of occurrences of words and phrases.
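A minimal frequency-based sketch of this idea: a rare token whose one-edit-away variant is much more frequent is a likely misspelling. The frequency ratio and edit-distance cutoff here are illustrative assumptions, not the job's configuration:

```python
from collections import Counter

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def misspellings(tokens, min_ratio=10):
    """Map rare tokens to much more frequent tokens one edit away."""
    counts = Counter(tokens)
    corrections = {}
    for rare, rare_count in counts.items():
        for common, common_count in counts.items():
            if (common_count >= min_ratio * rare_count
                    and edit_distance(rare, common) == 1):
                corrections[rare] = common
    return corrections

# hypothetical token stream from queries
tokens = ["phone"] * 20 + ["phome"]
fixes = misspellings(tokens)
```

The pairwise scan is quadratic in vocabulary size; a production job would restrict comparisons (for example, by token prefix or length) rather than compare all pairs.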
Ranking Metrics
Use this job to calculate relevance metrics by replaying ground truth queries against catalog data using variants from an experiment. Metrics include Normalized Discounted Cumulative Gain (nDCG) and others.
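As a reference point for how nDCG behaves, here is the standard formula in a short sketch (the graded-relevance inputs are hypothetical; this is the textbook metric, not the job's internals):

```python
import math

def dcg(rels):
    """Discounted cumulative gain: graded relevance discounted by
    log2(rank + 1), with ranks starting at 1."""
    return sum(rel / math.log2(pos + 1) for pos, rel in enumerate(rels, start=1))

def ndcg(ranked_rels, k=None):
    """nDCG = DCG of the actual ranking / DCG of the ideal ranking."""
    k = k or len(ranked_rels)
    ideal = sorted(ranked_rels, reverse=True)
    idcg = dcg(ideal[:k])
    return dcg(ranked_rels[:k]) / idcg if idcg else 0.0
```

A ranking already sorted by relevance scores 1.0; misordered results score lower, which is what lets the job compare query variants from an experiment.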
BPR Recommender
Use this job when you want to compute user recommendations or item similarities using a Bayesian Personalized Ranking (BPR) recommender algorithm.
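The heart of BPR is a pairwise objective: for a user u, push the score of a clicked item i above the score of an unclicked item j. A single stochastic-gradient step on latent factors can be sketched as follows; the learning rate, regularization, factor dimensionality, and toy data are all assumptions for illustration:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bpr_update(U, V, u, i, j, lr=0.05, reg=0.01):
    """One SGD ascent step on the BPR objective ln sigmoid(x_ui - x_uj)
    for the triple (user u, positive item i, negative item j)."""
    x = dot(U[u], V[i]) - dot(U[u], V[j])
    g = 1.0 / (1.0 + math.exp(x))  # sigmoid(-x), the gradient weight
    for f in range(len(U[u])):
        uf, vif, vjf = U[u][f], V[i][f], V[j][f]
        U[u][f] += lr * (g * (vif - vjf) - reg * uf)
        V[i][f] += lr * (g * uf - reg * vif)
        V[j][f] += lr * (-g * uf - reg * vjf)

# toy setup: one user who clicked item 0 but not item 1
random.seed(7)
U = [[random.uniform(0.1, 0.5) for _ in range(4)]]
V = [[random.uniform(0.1, 0.5) for _ in range(4)] for _ in range(2)]
for _ in range(200):
    bpr_update(U, V, 0, 0, 1)
```

After training, the clicked item outranks the unclicked one for that user; recommendations come from ranking all items by these scores, and item similarities from comparing item factor vectors.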
Query-to-Query Session-Based Similarity
This recommender is based on co-occurrence of queries in the context of clicked documents and sessions. It is useful when your data shows that users tend to search for similar items in a single search session. This method of generating query-to-query recommendations is faster and more reliable than the Query-to-Query Similarity recommender job. Unlike the similar-query pairs previously generated as part of the Synonym Detection job, it is session-based.
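The session-based co-occurrence signal can be illustrated with a small sketch. This is not the job's implementation; it assumes signals already grouped into per-session query lists, and a minimum co-occurrence count as a noise filter:

```python
from collections import defaultdict
from itertools import combinations

def query_cooccurrence(sessions, min_count=2):
    """Count how often two distinct queries co-occur within one session;
    keep pairs seen at least min_count times."""
    counts = defaultdict(int)
    for queries in sessions:
        for a, b in combinations(sorted(set(queries)), 2):
            counts[(a, b)] += 1
    return {pair: c for pair, c in counts.items() if c >= min_count}

# hypothetical sessions, each a list of queries issued by one user
sessions = [
    ["hdmi cable", "hdmi adapter"],
    ["hdmi cable", "hdmi adapter", "tv mount"],
    ["tv mount"],
]
recs = query_cooccurrence(sessions)
```

Pairs that survive the count threshold become query-to-query recommendations ("users who searched X also searched Y").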
Cluster Labeling
Cluster labeling jobs are run against your data collections, and are used:
When clusters or well-defined document categories already exist
When you want to discover and attach keywords that represent the documents within existing clusters
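One simple way to pick representative keywords, sketched here for illustration (not the job's algorithm): score each term by its in-cluster frequency weighted by how concentrated the term is in this cluster versus the whole collection:

```python
from collections import Counter

def label_cluster(cluster_docs, all_docs, top_n=3):
    """Label a cluster with terms that are both frequent inside it and
    concentrated in it relative to the full collection."""
    in_counts = Counter(w for d in cluster_docs for w in d.lower().split())
    bg_counts = Counter(w for d in all_docs for w in d.lower().split())
    # score = in-cluster count * (share of the term's occurrences here)
    scored = {w: c * c / bg_counts[w] for w, c in in_counts.items()}
    ranked = sorted(scored.items(), key=lambda kv: (-kv[1], kv[0]))
    return [w for w, _ in ranked[:top_n]]

# hypothetical collection with one cluster of Solr-related documents
docs = ["python spark job", "python ml model",
        "solr index query", "solr query facet"]
labels = label_cluster(docs[2:], docs)
```

Terms unique to the cluster score highest, so the labels read as a summary of what distinguishes that cluster.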
Document Clustering
The Document Clustering job uses an unsupervised machine learning algorithm to group documents into clusters based on similarities in their content. You can enable more efficient document exploration by using these clusters as facets, high-level summaries or themes, or to recommend other documents from the same cluster. The job can automatically group similar documents in all kinds of content, such as clinical trials, legal documents, book reviews, blogs, scientific papers, and products.
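To make the grouping idea concrete, here is a deliberately simple single-pass "leader" clustering sketch over bag-of-words vectors. It is an illustration of unsupervised grouping by content similarity, not the algorithm the job uses; the similarity threshold and tokenization are assumptions:

```python
import math
from collections import Counter

def tf_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def leader_cluster(docs, threshold=0.3):
    """Assign each doc to the first cluster whose leader is similar
    enough; otherwise start a new cluster with this doc as leader."""
    leaders, clusters = [], []
    for doc in docs:
        vec = tf_vector(doc)
        for idx, lead in enumerate(leaders):
            if cosine(vec, lead) >= threshold:
                clusters[idx].append(doc)
                break
        else:
            leaders.append(vec)
            clusters.append([doc])
    return clusters

# hypothetical documents: two topics
docs = ["solr query tuning", "solr query syntax",
        "chocolate cake recipe", "vanilla cake recipe"]
clusters = leader_cluster(docs)
```

The resulting cluster assignments are what would be indexed as facets or used to recommend same-cluster documents.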
Classification
This job analyzes how your existing documents are categorized and produces a classification model that can be used to predict the categories of new documents at index time.
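The learn-from-labels, predict-at-index-time loop can be sketched with a nearest-centroid classifier. This is an illustrative stand-in, not the model the job trains; the categories and training documents are hypothetical:

```python
import math
from collections import Counter, defaultdict

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def train_centroids(labeled_docs):
    """Sum the term vectors of each category's documents into a centroid."""
    centroids = defaultdict(Counter)
    for text, label in labeled_docs:
        centroids[label].update(vectorize(text))
    return centroids

def predict(centroids, text):
    """Assign a new document to the category with the closest centroid."""
    vec = vectorize(text)
    return max(centroids, key=lambda label: cosine(vec, centroids[label]))

# hypothetical labeled documents
training = [
    ("refund return policy", "support"),
    ("track my order status", "support"),
    ("4k oled television", "electronics"),
    ("wireless bluetooth speaker", "electronics"),
]
model = train_centroids(training)
```

At index time, `predict` would be called on each incoming document to attach a category field before the document is written to the index.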
Outlier Detection
Outlier detection jobs are run against your data collections to identify information that significantly differs from other data in the collection.
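One common way to flag values that "significantly differ" is a z-score test, sketched here on a numeric field. This is an illustration of the concept, not the job's detection method; the threshold and sample values are assumptions:

```python
import math

def zscore_outliers(values, threshold=2.5):
    """Flag values more than `threshold` population standard deviations
    from the mean."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    if std == 0:
        return []
    return [v for v in values if abs(v - mean) / std > threshold]

# hypothetical per-document metric, e.g. response sizes
values = [10, 11, 9, 10, 12, 10, 11, 100]
outliers = zscore_outliers(values)
```

A single extreme value inflates the standard deviation, so thresholds are usually tuned (or robust statistics such as the median absolute deviation used) rather than fixed at the textbook value of 3.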
Parallel Bulk Loader
The Parallel Bulk Loader (PBL) job enables bulk ingestion of structured and semi-structured data from big data systems, NoSQL databases, and common file formats like Parquet and Avro. Data sources the PBL reads from include not only common file formats, but also Solr, JDBC-compliant databases, MongoDB, and more. In addition, the PBL distributes the load across the Managed Fusion Spark cluster to optimize performance. And because no parsing is needed, indexing performance is also maximized by writing directly to Solr. For more information about usage and detailed configuration, see the Parallel Bulk Loader configuration reference.