Industrial Language-Image Dataset (ILID): Adapting Vision Foundation Models for Industrial Settings

Keno Moenck1,*, Duc Trung Thieu1, Julian Koch1, Thorsten Schüppstuhl1
1Hamburg University of Technology, Institute of Aircraft Production Technology
github.com/kenomo/ilid

In recent years, the rise of Large Language Models (LLM) has also encouraged the computer vision community to work on substantial multimodal datasets and to train models at scale in a self-/semi-supervised manner, resulting in Vision Foundation Models (VFM) such as Contrastive Language-Image Pre-training (CLIP). These models generalize well and perform outstandingly on everyday objects and scenes, even on downstream tasks they have not been trained on, while their application in specialized domains, as in an industrial context, is still an open research question.
Here, fine-tuning the models or transfer learning on domain-specific data is unavoidable when aiming for adequate performance. In this work, we, on the one hand, introduce a pipeline to generate the Industrial Language-Image Dataset (ILID) based on web-crawled data; on the other hand, we demonstrate effective self-supervised transfer learning and discuss downstream tasks after training on the cheaply acquired ILID, which does not necessitate human labeling or intervention. With the proposed approach, we contribute by transferring approaches from state-of-the-art research around foundation models, transfer learning strategies, and applications to the industrial domain.

1. Introduction

Machine vision technologies facilitated by deep learning usually outperform traditional methods, especially in dynamic and open settings.
In the scope of training deep models, industrial contexts1 lack everyday objects and scenes, typically covered by publicly available datasets, which is why applications in these specialized domains demand custom datasets, e.g., synthetically generated ones [1–4], which model the specific object and sensor domain.

*Corresponding author: keno.moenck@tuhh.de
1We define the industrial domain as follows: industrial activities serve to produce consumable goods or a capital asset, which includes production as the superordinate term involving all processes around it, including activities from manufacturing, assembly, logistics, or finance. In addition, tasks in the later lifecycle of a product, like Maintenance, Repair, and Overhaul (MRO), also belong to industrial activities. Vision applications are typically closer to the shopfloor than the topfloor.
The availability of curated, publicly accessible datasets specific to industrial needs is exceedingly sparse; e.g., the MVTec [5–8], VISION [9], or tool recognition [10] datasets encapsulate only a limited spectrum of objects and support only a handful of trainable tasks based on the provided ground truth annotations. Besides the need for training data, fine-tuning, domain adaptation, or transfer learning, i.e., transferring a model from a source to a related target (object/scene/sensor) domain, is unavoidable, which can reduce the necessary samples per conceptual class to only a few shots during training. The model's pre-training is the critical point here, where training data size, variability, and model size directly relate to the overall performance [11]. Large-scale pre-trained foundation models represent a paradigm shift in Artificial Intelligence (AI), characterized by extensive self-supervised training [12].
These models, e.g., BERT [13], the well-known GPT-n series [14–16], or Llama [17–19], learn rich knowledge representations capable of transcending to various downstream tasks. The shift in AI drives single-task and single-modality learners towards a paradigm encompassing diverse tasks and multimodalities, which more closely mimics human perception and cognitive processes. Following Large Language Models (LLM), Vision Foundation Models (VFM) have emerged in the last few years, capable of supporting not only 2D or even 3D modalities but also language [20]. Data for training at scale is typically web-crawled from the vast resources of the Internet, which then demands sophisticated post-processing, posing a variety of challenges [21, 22].
Besides, given such large, partially unstructured datasets, only self-supervised or unsupervised methods are able to learn from the data effectively. A self-supervised approach capable of learning from text and image modalities is contrastive learning, in which a model learns to distinguish between positive and negative combinations of samples, first presented at a large scale, nearly concurrently, by CLIP [23] and ALIGN [24].
Contrastive learning, by contrasting positive and negative samples in a batch, is, in the case of vision and language, based on a text and an image encoder. The idea is that the encoders are trained to output embeddings for the image and text, increasing the similarity of positive samples by decreasing their distance in the joint embedding space while increasing the distance of negative samples. Employing a text encoder allows for natural language supervision, relaxing the necessity of fixed classes as in the case of training traditional deep learning models like a ResNet [25]. This makes assembling a dataset at scale less laborious since assigning each image to a fixed class is omitted, enabling learning from unstructured data. Different language-image datasets of scale have emerged, ranging from 12M [22] to 5B [21] samples.
Since they are based on web-available data, not all cleaned, post-processed, and curated datasets are published, as in the case of CLIP.

Figure 1. CLIP on the task of classification after (a) transfer learning on the Industrial Language-Image Dataset (ILID) and (b) the zero-shot baseline results.

VFMs exhibit rich knowledge representations, are adaptable to various downstream tasks, and generalize better than conventional models, but only to a certain extent in novel and out-of-distribution domains, necessitating fine-tuning or transfer learning. As demonstrated in Fig. 1, the zero-shot CLIP model, given a highly out-of-distribution image, does not predict the ground truth, nor anything close to it. As already outlined, in the industrial domain, we face non-everyday objects and scenes, which is why we cannot rely on commonly available datasets for fine-tuning or transfer learning, which also inhibits the use of VFMs here.
In this work, we try to make a step towards utilizing VFM capabilities in specialized industrial domains with a threefold contribution:
• We propose a method to generate the Industrial Language-Image Dataset (ILID) from web-crawled data and release a version that covers objects from different industrial-related domains2. We publish the pipeline to generate the ILID at github.com/kenomo/ilid.
• We effectively demonstrate transfer learning to CLIP with the given dataset, which outperforms CLIP's zero-shot capabilities.
• We elaborate on different tasks that serve industrial domain-related vision applications. We publish the training- and evaluation-related code at github.com/kenomo/industrial-clip.
This work focuses on utilizing CLIP rather than other vision-language models due to its established usage and the well-studied fine-tuning/transfer learning strategies around it.
Besides, comparing only one established model on the data increases the focus, clarity, and depth of the findings in the scope of this work. Nevertheless, we encourage the reuse of ILID with other models as well as the application of further fine-tuning and transfer learning strategies. The rest of this work is structured as follows: First, in Sec. 2, we outline existing applications of VFMs in industrial settings and introduce Contrastive Language-Image Pre-training (CLIP) and currently existing fine-tuning/transfer learning approaches. In Sec. 3, we present our overall method of generating the dataset as well as our training procedure. We elaborate on our extensive experiments in Sec. 4. We conclude and discuss this work in Sec. 5.

2. Related Works

2.1. VFMs in industrial applications
Code recognition, object or position recognition, completeness checks, shape/dimension checks, and quantitative or qualitative inspection are typical vision applications in manufacturing [26]. While in manufacturing these are often suited toward narrow fields of view close to the object, in the neighboring domain of intralogistics, besides close-range tasks like inspecting load carriers for trash, contamination, and damage, or documentation, verification, assistance, and automation, perceiving the wider environment is often of interest, resulting in, e.g., foreign debris detection or object tracking [27, 28]. The first step in the perception pipeline of these applications is typically a fundamental vision task, e.g., in the 2D domain, assigning semantics to each pixel and clustering pixels into semantically meaningful regions. This is followed by enhancing the output with further semantics and finally forming the application-specific decision used in, e.g., part of a production system.
2Since the data from the web do not belong to us, we are not allowed to publish the images and texts, but we provide the final post-processed metadata, which can be used to reassemble the dataset. Please contact the corresponding author.
Figure 2. Overview of this work's method: (1) generation of the Industrial Language-Image Dataset (ILID), (2) transfer learning using the ILID, and (3) evaluating the performance in different tasks.

Up to this date, there exists only a small set of publications on the topic of utilizing VFMs in one or more steps of such vision pipelines, of which we give a small excerpt in the following: On a broader scale, [29] explore use cases for deploying VFMs in the industrial context without designing or elaborating on specific architectures and how to train, fine-tune, or do transfer learning.
[28] discusses the abilities of the Segment Anything Model (SAM), a class-agnostic segmentation model that generalizes outstandingly to unseen objects and scenes, in the context of vision applications in the aircraft industry, including manufacturing, intralogistics, and MRO. [30] name two use cases in PCB defect inspection and industrial human action recognition. Current literature offers ideas on utilizing LLMs, e.g., [31], or VFMs, e.g., [28–30, 32], in the industrial domain; little is known about how to enable VFMs to perform effectively in specific use cases. Besides having suitable datasets, training with the data demands specific strategies. We will elaborate on these aspects in the following sections.

2.2. Contrastive Language-Image Pre-training (CLIP)

CLIP learns rich image-text representations from natural language supervision, utilizing natural language as a prediction space to reach higher performance in generalization and transfer.
It is not an entirely novel approach; however, the idea of learning perception from natural language cannot be dated exactly to a specific piece of research. In 1999, [33] explored retrieving words for unknown images based on statistical learning to predict nouns and adjectives. In 2007, [34] demonstrated learning image representations using manifold learning to predict words for images. Recent approaches that emerged before CLIP and learn visual representations from text are Visual representations from Textual annotations (VirTex) [35], Image-Conditioned Masked Language Modeling (ICMLM) [36], and Contrastive Visual Representation Learning from Text (ConVIRT) [37].

2.2.1 Contrastive learning

A contrastive learning model consists of two main components: (1) an encoder for each input modality and (2) a loss function measuring the similarity between positive and negative pairs.
The encoders can be reused from other models and trainings, e.g., as demonstrated by OpenScene [38], which employs a frozen text and 2D image encoder while training a 3D point cloud encoder for language-based 2D/3D scene understanding. The encoder models are trained to complement and comprehend each other fully by encoding similar concepts of images and text into similar vectors.
That is, a text representing "photo of a hinge" would output a vector similar to its image counterpart and be further away from images that are not connected, as shown in Fig. 3. Besides prompting for the object's name, a sufficiently trained text encoder would also encode, e.g., conceptually close activities near the object's name embedding (s. Fig. 3).

Figure 3. Joint embedding space of text and image representations: conceptually similar texts and images are encoded close to each other; dissimilar pairings do not share similar positions.

The self-supervised pre-training of CLIP follows this idea: given N pairs of image and text, CLIP estimates the similarity of all possible N × N pairings. With each text and image pair in the multimodal vector space, the models inside CLIP are jointly trained to maximize the similarity of each of the N positive pairings and, at the same time, minimize the similarity of the N × N − N antagonistic pairs (s. Fig. 2). The embedding similarities between pairs are represented by the cosine similarity metric, which is used in a cross-entropy loss to optimize the image and text encoder at the same time.
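For illustration, the following is a minimal sketch of such a symmetric contrastive objective over a batch of N aligned pairs, assuming pre-computed, L2-normalized embeddings; the fixed temperature value is illustrative, whereas CLIP learns it as a parameter.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of N aligned image/text pairs.

    image_emb, text_emb: (N, D) tensors, assumed L2-normalized.
    The i-th image and i-th text form the positive pair; all other
    N x N - N combinations act as negatives.
    """
    # Cosine similarities of all N x N pairings, scaled by the temperature.
    logits = image_emb @ text_emb.t() / temperature          # (N, N)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)
```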
2.2.2 Performance

Zero-shot CLIP achieves similar performance to or even outperforms conventional fully supervised class-wise models while preserving robustness through the ability to learn from a broader range of representations from natural language, especially on in-distribution or slightly out-of-distribution data. On the other hand, zero-shot CLIP performs weakly on datasets that are far out-of-distribution, such as satellite images (EuroSAT [39], NWPU-RESISC45 [40]) or tumors (PatchCamelyon [41]) [23].
When comparing CLIP to other large pre-trained n-shot models such as BiT-M [42] and SimCLRv2 [43], CLIP's authors show that its zero-shot performance outperforms all other models in terms of average accuracy up to 4-shot linear classifiers trained on the same dataset [23]. A limitation is that scaling the model to learn from more data has steadily increased performance, but the required computing power increases exponentially, which is currently barely economically reasonable.

2.2.3 Recent development

Meanwhile, much work exists on further developments and adaptations of CLIP [44–51]. The most notable works are SLIP [50], DeCLIP [44], ReCLIP [46], CoCa [47], and FILIP [49], aiming to improve efficiency in the training process. SLIP combines language supervision and image self-supervision to improve performance further.
DeCLIP employs supervision across modalities as well as self-supervision within each modality, whereas ReCLIP first learns pseudo labels and then applies cross-modality self-supervision. CoCa, on the other hand, skips cross-attention in some decoder layers to encode unimodal text representations and cross-attends the remaining layers with the image encoder. By using a contrastive loss between unimodal image and text embeddings, along with a captioning loss for multimodal decoder outputs, CoCa efficiently trains on a wider variety of data with minimal overhead. Improved fine-grained performance of CLIP is demonstrated in the work on FILIP [49], where, instead of calculating the contrastive loss from global features of an entire image and text sequence, token-wise cross-modal interaction is modeled to take image patches and textual tokens into account at a finer granularity.
Since this work focuses mainly on the training data, we will not evaluate all the individual strategies that aim to increase performance. Instead, we use the vanilla CLIP model and employ basic transfer learning methods that can be applied with limited hardware resources, which also demonstrates their effectiveness in the scope of lower-cost applications.

2.3. Fine-tuning and transfer learning

Depending on the application, CLIP has two different ways to adapt to a new distribution, i.e., sets of data entirely outside the dataset on which CLIP was pre-trained. Fine-tuning and transfer learning are very similar ways to adapt CLIP, but they have different applications depending on the task at hand and different processes in modifying the architecture. Fine-tuning consists of training all layers or at least parts of the model.
This process is usually more suitable for adapting to small sets of data that are closely related to the dataset CLIP was pre-trained on, such as everyday objects and general concepts. On the other hand, in tasks where the dataset is very specific, i.e., requires specialized knowledge, transfer learning is better suited, as it freezes all the original layers of the pre-trained model and only adds or injects extra trainable layers or parameters.
This way, the learned features of the zero-shot model are preserved and optimized for generalization to novel, previously out-of-distribution data. Usually, the fine-tuning process requires many more resources in terms of time, data, and computation, as it modifies all layers of the model, compared to transfer learning. In the case of ILID, transfer learning proved to be a fitting solution, as the dataset is specialized specifically on industrial components, which are presumably not contained in the dataset CLIP was pre-trained on.

Figure 4. Dataset generation pipeline resulting in the Industrial Language-Image Dataset (ILID): (1) online catalogs, (2) web crawling, (3) pre-filtering, (4) processing, (5) post-filtering, (6) downloading.
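The practical difference can be summarized in a few lines: for transfer learning, the pre-trained CLIP weights stay frozen, and only newly injected parameters are handed to the optimizer. The sketch below assumes OpenAI's publicly available CLIP package; it is not the implementation used in this work.

```python
import clip   # OpenAI CLIP package, assumed to be installed
import torch

model, _ = clip.load("ViT-B/16")

# Fine-tuning: all (or selected) pre-trained layers would stay trainable.
total_params = sum(p.numel() for p in model.parameters())

# Transfer learning: freeze the pre-trained encoders entirely; only
# additionally injected layers/parameters (adapters, prompt vectors)
# would be passed to the optimizer.
for p in model.parameters():
    p.requires_grad_(False)

print(total_params, all(not p.requires_grad for p in model.parameters()))
```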
Notable works in transfer learning of CLIP are adapter-styled tuning, e.g., CLIPAdapter [52], and prompt learning, e.g., CoOp [53] and APEX [54]. CLIPAdapter (s. Fig. 5) adds dense down- and up-sampling layers on top of CLIP, either to the image, the text, or both encoders. Thereby, only the most prominent features are compressed into a lower dimension; from this latent space, the adapter then learns to reconstruct the essential ones. CoOp (s. Fig. 5) is the first to demonstrate continuous prompt learning, as introduced by [55], for CLIP, which means learning continuous prompts by backpropagation, either per label or as one specific prompt template for all labels.
Concretely, CoOp creates a set of learnable vectors, initialized with random values or given text embeddings, which the model adapts during training. APEX is the most recent approach, which also evaluates adding learnable tokens to the residual transformer blocks in the image encoder. Besides, APEX introduces a residual connection skipping the text adapter, steered by an adaptive coefficient, to perform better on a variety of out-of-distribution data.
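To make the adapter-styled tuning concrete, the following is a minimal sketch of a CLIPAdapter-like bottleneck applied to frozen encoder outputs, following the 4x feature reduction reported in [52] and blending adapted and original features with a ratio α; note that which share of the blend α denotes varies between implementations, and the layer sizes and names here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdapter(nn.Module):
    """Bottleneck adapter on top of frozen CLIP image or text features."""

    def __init__(self, dim: int = 512, reduction: int = 4, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha                              # mixing coefficient
        self.down = nn.Linear(dim, dim // reduction)    # compress to the most prominent features
        self.up = nn.Linear(dim // reduction, dim)      # reconstruct the essential ones

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        adapted = self.up(F.relu(self.down(features)))
        # Residual blend of adapted and original (frozen) features.
        mixed = self.alpha * adapted + (1.0 - self.alpha) * features
        return F.normalize(mixed, dim=-1)
```

In this work, such an adapter is applied to the image stream with α = 0.5 and, in one configuration, to the text stream with α = 0.2 (s. Sec. 3.2.1 and 4.3).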
3. Method

In this section, we first outline the generation of the ILID, including the dataset acquisition, the criteria for data selection, web crawling to gather extensive sets of unlabeled data, and filtering (s. Sec. 3.1). Secondly, we elaborate on the decisions for the model architecture and the training procedure in Sec. 3.2.

3.1. Dataset generation pipeline

Following a typical data pipeline structure, including data selection, transforming, and pre-/post-filtering (s., e.g., [22]), we employed six steps (s. Fig. 4) to generate the Industrial Language-Image Dataset (ILID). Each of the steps results in a structured JSON document containing all the outputs; the next step always takes the respective document as input.
1. While searching for reasonably organized industrial-related data on the Internet, we found that online catalogs contain relevant language-image information. Typically, web stores have one page per product, sometimes depicting a set of product configurations, a precise, often standardized, title, a description, information about the material, and further product information. These online stores are an adequate data source for the industrial domain. The first step was identifying a set of stores covering the necessary object domain.
2. Web crawling data from online catalogs follows two basic steps: getting the sitemap from robots.txt and writing a crawler for the specific structure of the product pages. The top-level robots.txt file delineates the Robots Exclusion Protocol, which guides crawlers and other bots on which sections of the website they are permitted to access. Typically, this file also specifies the location of the sitemap, an XML-formatted document designed to provide crawlers with information about all pages on a website. Sitemaps can be hierarchically ordered; in the case of online catalogs, there is typically one specific sitemap containing all products and their respective locations. We use Scrapy3 as a Python-based web crawler that takes a sitemap as input and crawls through all the specified locations. Creating a specific spider for a web catalog requires manual intervention since one has to define which images and text blocks to yield (a minimal spider sketch is given after this list).
Besides a central label tag for each entry, we save an unstructured list-typed data object, which can contain all other available information about the product, like materials, finish, colors, etc. Using the sitemap as the initial crawling entry point is a common step in every online search engine.
3. In the pre-filtering step, we filter out duplicate entries, remove special characters, and discard entries that do not have sufficient information. Besides, we filter the data for a set of trade names and remove these from all product information. Often, industrial product names include the manufacturer, which we neither want to use further nor let bias the following information extraction.
4. In the central processing step, we use a small, locally deployable LLM to extract our five target pieces of information from the unstructured data.
We define these as (1) a long label describing the product, (2) a short label that is shorter than the long label, (3) a description of the product, (4) the material, and (5) the finish or color of the product (s. also Fig. 2).

3Scrapy: A Fast and Powerful Scraping and Web Crawling Framework
In our study, we used Llama3-8B [19] in its fine-tuned instruct version (s. 6 for the respective prompt); a sketch of this extraction step also follows the list. We ask the LLM not to output any numbers or sizes; additionally, we remove them from the initial data since, on the one hand, we do not expect that a 2D image task can identify or recognize dimensional quantities given different camera positions and varying intrinsics, and, on the other hand, we do not want to bias the dataset with them. After prompting for the desired information, we extract it from the response and save it for further processing. We discard an item from the dataset if the prompt does not return sufficient output.
5. In the post-filtering step, we again filter for any unwanted characters and do some further cleaning, like lowercasing words.
6. In the final downloading step, all images are downloaded, post-processed, and resized while also assembling the final JSON specifying the dataset's text and metadata.
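As referenced in step 2, a minimal Scrapy sitemap spider could look like the following sketch; the domain, sitemap rule, and CSS selectors are hypothetical placeholders and have to be adapted to the page structure of each catalog.

```python
from scrapy.spiders import SitemapSpider

class CatalogSpider(SitemapSpider):
    """Crawls all product pages listed in an online catalog's sitemap."""
    name = "catalog"
    # Hypothetical sitemap location, usually referenced from /robots.txt.
    sitemap_urls = ["https://www.example-catalog.com/sitemap.xml"]
    # Only follow sitemap entries that point to product pages.
    sitemap_rules = [("/product/", "parse_product")]

    def parse_product(self, response):
        # Which text blocks and images to yield has to be defined per catalog.
        yield {
            "label": response.css("h1.product-title::text").get(),
            "data": response.css("div.product-description ::text").getall(),
            "image_urls": response.css("img.product-image::attr(src)").getall(),
        }
```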
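For the central processing step (4), the sketch below illustrates prompting a locally deployed instruct model for the five target fields and validating the response; the model identifier, prompt wording, and generation parameters are assumptions for illustration and do not reproduce the exact prompt used in this work (s. 6).

```python
import json
from transformers import pipeline

# Hypothetical local deployment of an instruct-tuned Llama-3-8B model.
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

TARGET_FIELDS = ["label_long", "label_short", "description", "material", "material_finish"]

def extract_fields(raw_product: dict):
    """Ask the LLM for the five target fields; return None on insufficient output."""
    prompt = (
        "Extract the following fields from the product data as JSON with keys "
        f"{TARGET_FIELDS}. Do not output any numbers or sizes.\n\n"
        f"Product data: {json.dumps(raw_product)}\n\nJSON:"
    )
    response = generator(prompt, max_new_tokens=256, return_full_text=False)[0]["generated_text"]
    try:
        fields = json.loads(response[response.index("{"): response.rindex("}") + 1])
    except ValueError:
        return None  # insufficient output: drop the item from the dataset
    return fields if all(fields.get(k) for k in TARGET_FIELDS) else None
```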
With the given steps, we are able to extract a product's image and a structured set of five pieces of information. Besides, we observed that even a small model such as Llama3-8B in its instruct fine-tuned version is mostly able to extract the demanded information from the unstructured text. We show an excerpt of the dataset in 7.

3.2. Transfer learning

3.2.1 Model architecture

As we already outlined in Sec. 2.3, we adopt a simple yet effective strategy for transfer learning from CLIP's in-distribution to our ILID dataset. Within CLIP's dual-encoder setup, we must utilize a strategy for both the image and the text stream. Fig. 5 depicts the used model architectures.
Since we estimate that the images we want to learn from, but also infer on, show characteristics similar to CLIP's in-distribution data, compared to fully out-of-distribution image data as in the case of, e.g., PatchCamelyon [41] (s. Sec. 2.2.2), we employ only a simple trainable adapter on the image stream, as proposed by [52]. We tuned the mixing coefficient manually; we observed that a low α can quickly result in overfitting, while a high value does not necessarily increase the performance significantly during cross-validation. That is why we chose a balanced value of α = 0.5. The adapters reduce the feature dimension by a factor of 4, as proposed in the original paper [52]. We omitted testing prompt tuning on the image stream as introduced by APEX [54], since we estimate a relatively low distribution shift from the CLIP dataset to ILID regarding the images.
In contrast, prompt engineering is a crucial task for learning as well as for inference with textual, promptable models. In a preliminary study, we had already observed that vanilla CLIP performs differently given different prompt templates like "a photo of {}." compared to "a photo of {}, an industrial product." The difference caused by this minor change stems from the prompts CLIP was pre-trained with, which follow similar characteristics. Not having to manually tune discrete prompts motivated us to utilize CoOp [53] as a continuous prompt learning method. Besides, we also evaluate in the experimental study (s. Sec. 4) the performance of adding an additional adapter to the text stream.
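A minimal sketch of such a CoOp-style continuous prompt follows: cn learnable context vectors are prepended to the tokenized label embeddings and optimized by backpropagation while CLIP's text encoder stays frozen. The actual CoOp implementation inserts the context between the start token and the class tokens, which this sketch simplifies; names and the initialization scale are illustrative.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """CoOp-style continuous prompt: cn learnable context vectors shared by all labels."""

    def __init__(self, cn: int = 10, dim: int = 512):
        super().__init__()
        # Context vectors, e.g., initialized randomly or from the token embeddings of
        # "a photo of an industrial product"; adapted during training.
        self.context = nn.Parameter(torch.randn(cn, dim) * 0.02)

    def forward(self, label_embeddings: torch.Tensor) -> torch.Tensor:
        """label_embeddings: (num_labels, seq_len, dim) token embeddings of the labels.

        Returns the prompt sequences fed to the frozen CLIP text encoder,
        i.e., [ctx_1, ..., ctx_cn, label tokens] per label.
        """
        batched_ctx = self.context.unsqueeze(0).expand(label_embeddings.size(0), -1, -1)
        return torch.cat([batched_ctx, label_embeddings], dim=1)
```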
3.2.2 Training

During the pre-training of CLIP, a very large minibatch size of 32,768 was used, which took a total of 12 days on 256 V100 GPUs for the largest Vision Transformer (ViT) configuration (428M parameters) [23]. Compared to the pre-training, during transfer learning with CoOp, we have a total of cn × 512 trainable weights (cn = number of context vectors), which can be managed on a single consumer GPU in a reasonable time. However, the batch size has to be chosen wisely, from the memory point of view as well as by looking at the dataset labels. Given 32k samples per minibatch out of a total of 400M, the chance, utilizing random sampling, that non-contrastive samples are included in one minibatch is negligibly small.
In contrast, fine-tuning or transfer learning approaches typically contrast all possible class labels against a set of images [52–54, 56] during benchmark studies on datasets like ImageNet [57], which is why non-contrasting samples are not possible as long as the classes are conceptually far away from each other. The assembled ILID dataset does not have any class concept, meaning that we do not know a priori how semantically close two samples and their labels are to each other. Contrasting a set of images against all possible labels is infeasible memory-wise; that is why we cannot follow this training method and only contrast the images and their labels inside one batch, as done during pre-training. This change led us to employ a different optimizer from the one used in the original CoOp implementation, since Stochastic Gradient Descent (SGD) would not converge given the smaller batch size.
We changed from vanilla SGD to Adadelta [58], an SGD variant that adapts learning rates over time using only first-order information.

4. Experiments

In this section, we present a series of studies utilizing ILID, designed to evaluate the effectiveness of the dataset and transfer learning approach for different tasks. We begin with the dataset properties (s. Sec. 4.1), describe the experimental setup (s. Sec. 4.2), and present quantitative results on cross-validation (s. Sec. 4.3) as well as training and inference on a different label type (s. Sec. 4.4). Further, we present the results of a downstream task on segmentation (s. Sec. 4.5).
Figure 5. The architectures used in this work: (a) CLIPAdapter [52] and (b) CoOp [53].

4.1. Dataset

For the presented ILID, as of now, we crawled five different online shops, resulting in 12,537 valid samples covering a diverse range of products, from small standard elements like hinges, linear motion elements, bearings, or clamps to larger ones like scissor lifts, pallet trucks, etc. (an excerpt is depicted in 7).
Figure 6. Top-40 word occurrences in the label label_short.

Fig. 6 depicts the top-40 word occurrences in the label label_short, showing that typical concepts of industrial standard parts like clamp, lever, handle, knob, hinge, or swivel are strongly represented, but also material types (steel, aluminum/aluminium) and properties (stainless). Tab. 1 lists the number of unique labels per label category and hints at the dataset's diversity. Obviously, with an increasing number of words (on average: label_short < label_long < description), the number of unique labels per category increases.
So, nearly every sample has a unique description, but, on average, two samples share the same label_short. Since we do not account for minor preposition words like a/an/the in the labels, the labels are slightly more similar on the semantic level. However, we estimate a good diversity in the dataset, and since we do not account for preposition words in the counting, at least three to four samples are included per semantically similar class, which should suffice for a tuned CLIP to outperform fully supervised models (s. Sec. 2.2.2). We use the presented version of ILID in the following experiments.

Table 1. Number of unique labels per label category.
label_short  label_long  material  material_finish  description
6785         8476        2899      3375             11452
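The label statistics in Tab. 1 can be reproduced from the released metadata in a few lines; the file name and the flat list-of-records layout are assumptions based on the pipeline description in Sec. 3.1.

```python
import json

# Hypothetical path to the released ILID metadata (one record per sample).
with open("ilid_metadata.json", encoding="utf-8") as f:
    samples = json.load(f)

fields = ["label_short", "label_long", "material", "material_finish", "description"]
unique_counts = {field: len({s[field] for s in samples}) for field in fields}
print(unique_counts)  # e.g., {"label_short": 6785, "label_long": 8476, ...}
```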
4.2. Setup

We build upon the code base of Dassl [59, 60] and trained on a single 4090 GPU. We chose random sampling, an image input size of 224×224, and CLIP's pre-trained ViT-B/16 as the image encoder instead of the ResNet version, as ViTs have much less image-specific inductive bias than CNNs. We initialized CoOp with a context length of cn = 10 if not otherwise stated. We trained with a batch size of 64 while testing with only 32 samples. This increases the accuracy during validation/testing compared to the training accuracy, but we consider it more realistic for real-world applications, which contrast only around 32 different conceptual object classes at a time. We applied the common data augmentation techniques of random resizing, cropping, and flipping during training. Besides, we normalized ILID's image data.
We use Adadelta [58] with a learning rate of 0.15, a weight decay of 1e-3, and a cosine learning rate scheduler. Besides, we used 3 warm-up epochs with a constant learning rate of 1e-2 to prevent rapid parameter changes in the initial training stages, which can lead to early overfitting.
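A sketch of this optimization setup with the stated hyperparameters; the warm-up handling is simplified compared to the Dassl-based implementation, and trainable_params is assumed to contain only the adapter and context-vector weights.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

def build_optimization(trainable_params, max_epoch: int = 100):
    # Adadelta instead of SGD, which did not converge with the small batch size.
    optimizer = torch.optim.Adadelta(trainable_params, lr=0.15, weight_decay=1e-3)
    scheduler = CosineAnnealingLR(optimizer, T_max=max_epoch)
    return optimizer, scheduler

def apply_warmup(optimizer, epoch: int, warmup_epochs: int = 3):
    # Constant warm-up learning rate for the first epochs to avoid rapid
    # parameter changes and early overfitting.
    if epoch < warmup_epochs:
        for group in optimizer.param_groups:
            group["lr"] = 1e-2
```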
4.3. Quantitative results

Since we do not have a different real-world language-image dataset at hand, we used 6-fold cross-validation during the evaluation of the different model architectures. Fig. 7 depicts the validation results of training on the label_short and label_long with CoOp, CoOp + image adapter (CoOpIA) with αi = 0.5, CoOp + image adapter αi = 0.5 and text adapter αt = 0.2 (CoOpIATA), image and text adapter only (CLIPAdapter, same αs), and the zero-shot CLIP performance. All accuracies are derived from the top-1 predictions. Additionally, we list the top-3 accuracies for zero-shot CLIP and CoOpIATA (dashed lines).
A first observation (▷Obs. 1) is that all transfer learning approaches effectively outperform CLIP's zero-shot capabilities, even the top-3 accuracies, after training for ≈20 epochs, highlighting that the ILID is out-of-distribution. Even training on the less information-rich label_short outperforms CLIP's zero-shot capabilities. CLIP highly depends on the chosen prompt template (▷Obs. 2). If we look at training on the label_long, the zero-shot accuracy (prompt template "a photo of an industrial product {label_long}") is lower than that of all other trained methods initialized with random weights in the adapters and CoOp with "X X X X a photo of an industrial product {label_long}", in which all tokens except the label are optimized. CLIP performs poorly if given a prompt that deviates much from the ones seen during pre-training. That especially applies to combining the prompt template with multi-word product descriptions.
As expected, the more trainable weights we add, the better the model adapts to the data; the domain generalization to the in-distribution data reaches a maximum accuracy of 79.93% and 84.31% for label_short and label_long, respectively, and an image adapter is crucial to effective transfer learning in this case (▷Obs. 3). Moreover, the top-performing model also depends on prompt learning, but adapter-styled tuning performs better than prompt learning alone on the text stream (▷Obs. 4). However, adapting to the images will reduce the model's performance on slightly to fully out-of-distribution data. That means inference on images that vastly differ from catalog-style ones will have lower performance than on the in-distribution images. Nevertheless, the trained model will still outperform zero-shot CLIP, as we will see in the following sections.
To gain a further understanding of how transfer learning affects the embeddings, we derived the image and text embeddings after training on the full ILID given the label label_short for 100 epochs. Fig. 8 visualizes the high-dimensional embeddings of the same 100 samples. With each transfer learning method that adds more trainable weights, the text and image embeddings share the joint embedding space more closely. Further, multiple text embeddings get so close that they are hard to distinguish in the t-SNE diagram at all, from which we infer that the transfer learning approaches learn to group semantically close concepts, while in the zero-shot case, these are still more widely scattered. Moreover, image and text embeddings are much more pronounced after transfer learning than in the zero-shot case.
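The embedding visualization of Fig. 8 can be reproduced along the following lines; the perplexity, marker choices, and the way the 100 embeddings are gathered are illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(image_emb: np.ndarray, text_emb: np.ndarray):
    """Project image and text embeddings (each of shape (100, D)) into 2D with t-SNE."""
    joint = np.concatenate([image_emb, text_emb], axis=0)
    points = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(joint)
    n = image_emb.shape[0]
    plt.scatter(points[:n, 0], points[:n, 1], marker="o", label="image embeddings")
    plt.scatter(points[n:, 0], points[n:, 1], marker="x", label="text embeddings")
    plt.legend()
    plt.show()
```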
4.4. Prompting for materials

Besides training and testing on the label_short and label_long, we additionally trained CoOpIATA for 100 epochs on the material label with the initial prompt "X X X X a photo of an industrial product with material {}". We then evaluated the zero-shot and CoOpIATA performance on the images depicted in Fig. 9, choosing for the zero-shot test a prompt template similar to the one used for training CoOpIATA. The results are listed in Tab. 2. Surprisingly, CLIP's zero-shot performance yields 2 out of 5 true positives, while the transfer learning result is 5 out of 5. Further, looking at the scores, we see that our proposed transfer learning method produces much higher confidence in every case, which suggests that the different concepts of materials are not in-distribution in the zero-shot case.
Interestingly, a prompt including {aluminum} results in lower scores than using the word {aluminium}, which points out that the subtleties or discrepancies of the language used in an industrial context are not captured after the transfer learning nor in the zero-shot case. That is why we added both words to the prompts. Further, after transfer learning, judging based on the scores, there is still slight confusion between the material concepts of aluminum and polyamide as well as polyamide and brass. We estimate that the transfer learning introduced a specific object-material awareness but is still heavily influenced by other image characteristics, like, in our case, the yellow tint. The given task might not serve a real-world industrial vision use case at this stage, but it shows how ILID can serve different tasks at hand by combining images with different (broad) language information during training.
These results again underline the rich multimodal capabilities of a natural-language-supervised VFM.

Figure 7. Results of 6-fold cross-validation during transfer learning using different approaches on the ILID: (a) label_short, (b) label_long; x-axis: epoch, y-axis: cross-validation accuracy (%).

Figure 8. t-SNE diagrams from the same randomly selected 100 samples (CoOpIA and CoOpIATA were trained for 100 epochs on the full ILID given the label label_short).

4.5. Language-guided segmentation

A typical downstream task is language-guided segmentation utilizing the Segment Anything Model (SAM) [61]. SAM is a class-agnostic, point-promptable image segmentation model that outputs hierarchical masks and predicted Intersection over Union (IoU) scores.
Without the need for manual intervention, an automatic mask generation pipeline can sample a point grid and subsequently use Non-Maximum Suppression (NMS) to reduce, through merging, a large set of masks into more precise proposals. In the simplest form, language-guided image segmentation based on SAM and CLIP can be employed by applying CLIP to all generated masks, which we cut out with a particular delineation factor. CLIP's softmaxed logits can then be thresholded to get the final per-mask class-wise predictions. We only contrasted the object prompt against an empty class label. Contrasting only two prompts is challenging since the model's overconfidence in one of them is then most pronounced; we chose to do so to avoid any bias introduced by hard negative prompts. For the language-guided segmentation, we used a CoOpIATA model trained on the complete ILID dataset given the label_long for 40 epochs.
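A minimal sketch of this mask-then-classify pipeline, assuming the publicly available segment_anything and CLIP packages; the checkpoint path, the prompt, and the score threshold are illustrative, and the per-mask crop omits the delineation margin mentioned above.

```python
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)

model, preprocess = clip.load("ViT-B/16", device=device)
# Contrast the object prompt against an empty class label only.
prompts = clip.tokenize(["a photo of an industrial product collet", ""]).to(device)

image = np.array(Image.open("scene.jpg").convert("RGB"))
for mask in mask_generator.generate(image):
    x, y, w, h = mask["bbox"]                      # cut out the mask region
    crop = Image.fromarray(image[y:y + h, x:x + w])
    with torch.no_grad():
        logits_per_image, _ = model(preprocess(crop).unsqueeze(0).to(device), prompts)
        score = logits_per_image.softmax(dim=-1)[0, 0].item()
    if score > 0.9:                                # threshold the softmaxed logits
        print("mask classified as 'collet' with score", round(score, 2))
```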
For completeness, it should be mentioned that we did not compare it against the other approaches, e.g., CLIPAdapter. Fig. 10 depicts the segmentation results in a challenging scene composed of multiple collets stacked on a trolley. The zero-shot results do have many true positives, but overall, we are not able to observe any further prediction patterns. In contrast, the transfer learning approach can effectively distinguish between a mask containing a collet and a mask that does not. With only 17 word occurrences of "collet" in ILID's label_long labels, the resulting model's confidence compared to zero-shot CLIP effectively demonstrates the proposed method. Additionally, the images relating to the labels do not contain collets of the same shapes and sizes, which emphasizes CLIP's learned rich representations. We discuss two further examples in 8.
5. Conclusion and Outlook

Using VFMs as a building block in industrial vision applications is a promising and transformative technique, improving the accuracy, speed, and reliability of systems involved in, e.g., inspection, robotic control, part identification, and process control, leading to enhanced operational efficiency and product quality. As we outlined in Sec. 2.1, up to this date, the literature offers only a limited number of use case ideas regarding VFMs in industrial applications, which we want to motivate further.
This work strived to make a step towards enabling the employment of VFMs in industrial machine vision applications by introducing the Industrial Language-Image Dataset (ILID) to bring industrial context into CLIP and by evaluating effective self-supervised transfer learning from the dataset. We demonstrated this by evaluating downstream tasks from prompting for material properties to language-guided segmentation. With only a limited dataset size of ≈12k samples, the results show promising opportunities in machine vision applications when increasing the dataset size or further restricting it to more specific domains.

Figure 9. Five different real-world images used for prompting material properties.

Figure 10. Language-guided segmentation results given prompt "collet" compared to zero-shot CLIP under the same settings (segmentation properties and thresholds).

Table 2. Scores on predicting the object's material properties in the images from Fig. 9 (bold indicates the highest scores; underlined values correspond to the ground truth).
                                   (a)    (b)    (c)    (d)    (e)
Zero-shot CLIP
"steel"                            0.024  0.113  0.330  0.168  0.059
"polyamide"                        0.149  0.196  0.062  0.107  0.208
"thermoplastic"                    0.245  0.141  0.050  0.034  0.097
"aluminum or aluminium"            0.043  0.143  0.166  0.238  0.094
"anodized aluminum or aluminium"   0.030  0.143  0.070  0.064  0.023
"plastic"                          0.352  0.244  0.099  0.107  0.280
"brass"                            0.156  0.020  0.223  0.282  0.240
CoOpIATA trained on the material label
"steel"                            0.007  0.033  0.950  0.829  0.137
"polyamide"                        0.135  0.368  0.004  0.008  0.361
"thermoplastic"                    0.010  0.004  0.002  0.001  0.160
"aluminum or aluminium"            0.009  0.085  0.020  0.011  0.001
"anodized aluminum or aluminium"   0.007  0.374  0.003  0.007  0.001
"plastic"                          0.694  0.135  0.008  0.041  0.077
"brass"                            0.139  0.000  0.012  0.104  0.264
One can argue that the big digital players like OpenAI or Meta can also incorporate industrial data during the training of their models; however, the overall proposed method, from dataset curation to fine-tuning CLIP, also suits, e.g., companies with intellectual property constraints or limited computing resources in employing VFMs. Nevertheless, fine-tuning expert models for specific tasks is a common step in creating an AI application, which we showcased, e.g., with the transfer learning on material properties. Future work must also elaborate on training with ILID's other labels, like description, to further discuss opportunities for other applications. The current limitations we observed on the text stream are especially the limited learned language subtleties and discrepancies as they occur in industrial contexts.
The confusion between the same concept termed differently in American (aluminum) and British (aluminium) English shows that there is a need for pre-training the text encoder with broader natural language, e.g., even with extended context, which would enable training not only on shorter image labels. Further, on the image stream, we observed that the model generalizes well to a variety of an object's different views but performs less well when contrasting finer-grained different object types. Here, a custom expert model is probably more suited than transfer learning from a dataset that includes many different object concepts.
The most limiting characteristic is including or inferring dimensional quantities, which can hardly be solved when training on images captured with different cameras and their individual intrinsics. With this work, we hope to encourage the industrial community to employ and work with VFMs in the industrial domain more and more. Therefore, we publicly provide ILID and the code used during training. In the future, we plan to continue increasing the dataset size by incorporating more web catalogs.

Acknowledgments

This work is part of the research project Intelligent Digital Cabin Twin (InDiCaT) under the grant number 20D1902C, supported by the Federal Ministry for Economic Affairs and Climate Action (BMWK) as part of the Federal Aeronautical Research Programme LuFo VI-1. We thank MÄDLER GmbH for granting us the rights to use some of their product images
(included in Fig. 2, 3, 5, 11, and 12) in this publication.

CRediT author statement

K. Moenck: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data Curation, Writing – original draft, Writing – review & editing, Visualization, Supervision, Project administration; D.T. Thieu: Conceptualization, Methodology, Software, Formal analysis, Investigation, Data Curation, Writing – original draft; J. Koch: Writing – review & editing; T. Schüppstuhl: Supervision, Project administration, Funding acquisition, Writing – review & editing.

References

[1] D. Schoepflin, D. Holst, M. Gomse, T. Schüppstuhl, Synthetic training data generation for visual object identification on load carriers, Procedia CIRP 104 (2021) 1257–1262. doi:10.1016/j.procir.2021.11.211.
[2] D. Schoepflin, K. Iyer, M. Gomse, T. Schüppstuhl, Towards synthetic ai training data for image classification in intralogistic settings, in: Schüppstuhl (Ed.) 2022 – Annals of Scientific Society, Springer Cham, 2022, pp. 325–336. doi:10.1007/978-3-030-74032-0_27.
[3] D. Holst, D. Schoepflin, T. Schüppstuhl, Generation of synthetic ai training data for robotic grasp-candidate identification and evaluation in intralogistics bin-picking scenarios, in: K.-Y. Kim (Ed.), Flexible Automation and Intelligent Manufacturing, Lecture Notes in Mechanical Engineering Ser, Springer International Publishing AG, Cham, 2022, pp. 284–292. doi:10.1007/978-3-031-18326-3_28.
[4] O. Schmedemann, M. Baaß, D. Schoepflin, T. Schüppstuhl, Procedural synthetic training data generation for ai-based defect detection in industrial surface inspection, Procedia CIRP 107 (2022) 1101–1106. doi:10.1016/j.procir.2022.05.115.
[5] B. Drost, M. Ulrich, P. Bergmann, P. Hartinger, C. Steger, Introducing mvtec itodd — a dataset for 3d object recognition in industry, in: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), IEEE, 2017, pp. 2200–2208. doi:10.1109/ICCVW.2017.257.
[6] P. Bergmann, M. Fauser, D. Sattlegger, C. Steger, Mvtec ad — a comprehensive real-world dataset for unsupervised anomaly detection, in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Piscataway, NJ, 2019, pp. 9584–9592. doi:10.1109/CVPR.2019.00982.
[7] P. Bergmann, K. Batzner, M. Fauser, D. Sattlegger, C. Steger, The mvtec anomaly detection dataset: A comprehensive real-world dataset for unsupervised anomaly detection, International Journal of Computer Vision 129 (4) (2021) 1038–1059. doi:10.1007/s11263-020-01400-4.
[8] P. Bergmann, K. Batzner, M. Fauser, D. Sattlegger, C. Steger, Beyond dents and scratches: Logical constraints in unsupervised anomaly detection and localization, International Journal of Computer Vision 130 (4) (2022) 947–969. doi:10.1007/s11263-022-01578-9.
[9] H. Bai, S. Mou, T. Likhomanenko, R. G. Cinbis, O. Tuzel, P. Huang, J. Shan, J. Shi, M. Cao, Vision datasets: A benchmark for vision-based industrial inspection (2023). doi:10.48550/arXiv.2306.07890.
[10] L. Büsch, J. Koch, D. Schoepflin, M. Schulze, T. Schüppstuhl, Towards recognition of human actions in collaborative tasks with robots: Extending action recognition with tool recognition methods, Sensors (Basel, Switzerland) 23 (12) (2023). doi:10.3390/s23125718.
[11] J. Zhang, J. Huang, S. Jin, S. Lu, Vision-language models for vision tasks: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence PP (2024). doi:10.1109/TPAMI.2024.3369699.
[12] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al., On the opportunities and risks of foundation models (2021). doi:10.48550/arXiv.2108.07258.
[13] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding (2018). doi:10.48550/arXiv.1810.04805.
[14] P. Budzianowski, I. Vulić, Hello, it's gpt-2 – how can i help you? towards the use of pretrained language models for task-oriented dialogue systems (2019). doi:10.48550/arXiv.1907.05774.
[15] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., Language models are few-shot learners (2020). doi:10.48550/arXiv.2005.14165.
[16] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., Gpt-4 technical report (2023). doi:10.48550/arXiv.2303.08774.
[17] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al., Llama: Open and efficient foundation language models (2023). doi:10.48550/arXiv.2302.13971.
[18] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., Llama 2: Open foundation and fine-tuned chat models (2023). doi:10.48550/arXiv.2307.09288.
[19] Meta AI, Introducing meta llama 3: The most capable openly available llm to date (26.05.2024). URL https://ai.meta.com/blog/meta-llama-3/
[20] M. Awais, M. Naseer, S. Khan, R. M. Anwer, H. Cholakkal, M. Shah, M.-H. Yang, F. S. Khan, Foundational models defining a new era in vision: A survey and outlook (2023). doi:10.48550/arXiv.2307.13721.
[21] C. Schuhmann, R. Beaumont, R. Vencu, C. Gordon, R. Wightman, M. Cherti, T. Coombes, A. Katta, C. Mullis, M. Wortsman, et al., Laion-5b: An open large-scale dataset for training next generation image-text models (2022). doi:10.48550/arXiv.2210.08402.
[22] S. Changpinyo, P. Sharma, N. Ding, R. Soricut, Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts (2021). doi:10.48550/arXiv.2102.08981.
[23] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., Learning transferable visual models from natural language supervision (2021). doi:10.48550/arXiv.2103.00020.
[24] C. Jia, Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. V. Le, Y. Sung, Z. Li, T. Duerig, Scaling up visual and vision-language representation learning with noisy text supervision, International Conference on Machine Learning (2021). doi:10.48550/arXiv.2102.05918.
[25] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition (2015). doi:10.48550/arXiv.1512.03385.
[26] A. Hornberg, Handbook of machine and computer vision: The guide for developers and users, second, revised and updated edition, Wiley-VCH, Weinheim, 2017. doi:10.1002/9783527413409.
2 [27] A. Naumann, F. Hertlein, L. D ¨orr, S. Thoma, K. Furmans, Literature review: Computer vision applications in trans- portation logistics and warehousing (2023). doi:10. 48550/arXiv.2304.06009 . 2 [28] K. Moenck, A. Wendt, P. Pr ¨unte, J. Koch, A. Sahrhage, J. Gierecker, O. Schmedemann, F. K ¨ahler, D. Holst, M. Gomse, et al., Industrial segment anything – a case study in aircraft manufacturing, intralogistics, maintenance, repair, and overhaul (2023). doi:10.48550/arXiv.2307. 12674 . 2, 3 [29] J. Wang, Y . Tian, Y .
459277bc-9089-4d4c-abd3-b89a57ebc086
Wang, J. Yang, X. Wang, S. Wang, O. Kwan, A framework and operational procedures for metaverses-based industrial foundation models, IEEE Trans- actions on Systems, Man, and Cybernetics: Systems 53 (4) (2023) 2037–2046. doi:10.1109/TSMC.2022. 3226755 . 3[30] H. Zhang, S. S. Dereck, Z. Wang, X. Lv, K. Xu, L. Wu, Y . Jia, J. Wu, Z. Long, W. Liang, et al., Large scale foun- dation models for intelligent manufacturing applications: A survey (2023). doi:10.48550/arXiv.2312.06718 .
44643e78-7005-4ac9-9550-3a06d7664f8c
3 [31] L. Makatura, M. Foshey, B. Wang, F. H ¨ahnLein, P. Ma, B. Deng, M. Tjandrasuwita, A. Spielberg, C. E. Owens, P. Y . Chen, et al., How can large language models help humans in design and manufacturing? (2023). doi:10.48550/ arXiv.2307.14377 . 3 [32] C. Picard, K. M. Edwards, A. C. Doris, B. Man, G. Gian- none, M. F. Alam, F. Ahmed, From concept to manufactur- ing: Evaluating vision-language models for engineering de- sign (2023). doi:10.48550/arXiv.2311.12668 . 3 [33] Y .
48913af9-3005-4e19-bb09-3a6c6c7fa73c
Mori, H. Takahashi, R. Oka, Image-to-word transforma- tion based on dividing and vector quantizing images with words, in: First international workshop on multimedia in- telligent storage and retrieval management, V ol. 2, 1999. 3 [34] A. Quattoni, M. Collins, T. Darrell, Learning visual rep- resentations using images with captions, in: IEEE Confer- ence on Computer Vision and Pattern Recognition, 2007, IEEE Computer Society, Los Alamitos, Calif., 2007, pp. 1–8. doi:10.1109/CVPR.2007.383173 . 3 [35] K. Desai, J. Johnson, Virtex: Learning visual representa- tions from textual annotations (2020). doi:10.48550/ arXiv.2006.06666 .
b7ca61c3-ea44-499f-b1ea-650aa6f92232
3 [36] M. B. Sariyildiz, J. Perez, D. Larlus, Learning visual rep- resentations with caption annotations (2020). doi:10. 48550/arXiv.2008.01392 . 3 [37] Y . Zhang, H. Jiang, Y . Miura, C. D. Manning, C. P. Langlotz, Contrastive learning of medical visual representations from paired images and text (2020). doi:10.48550/arXiv. 2010.00747 . 3 [38] S. Peng, K. Genova, C. Jiang, A. Tagliasacchi, M. Polle- feys, T. Funkhouser, Openscene: 3d scene understanding with open vocabularies (2022). doi:10.48550/arXiv. 2211.15654 .
[39] P. Helber, B. Bischke, A. Dengel, D. Borth, EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification (2017). doi:10.48550/arXiv.1709.00029. 4
[40] G. Cheng, J. Han, X. Lu, Remote sensing image scene classification: Benchmark and state of the art, Proceedings of the IEEE 105 (10) (2017) 1865–1883. doi:10.1109/JPROC.2017.2675998. 4
[41] B. S. Veeling, J. Linmans, J. Winkens, T. Cohen, M. Welling, Rotation equivariant CNNs for digital pathology (2018). doi:10.48550/arXiv.1806.03962. 4, 6
[42] A. Kolesnikov, L. Beyer, X. Zhai, J. Puigcerver, J. Yung, S. Gelly, N. Houlsby, Big Transfer (BiT): General visual representation learning (2019). doi:10.48550/arXiv.1912.11370. 4
[43] T. Chen, S. Kornblith, K. Swersky, M. Norouzi, G. Hinton, Big self-supervised models are strong semi-supervised learners (2020). doi:10.48550/arXiv.2006.10029. 4
[44] Y. Li, F. Liang, L. Zhao, Y. Cui, W. Ouyang, J. Shao, F. Yu, J. Yan, Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm (2021). doi:10.48550/arXiv.2110.05208. 4
[45] S. Goel, H. Bansal, S. Bhatia, R. A. Rossi, V. Vinay, A. Grover, CyCLIP: Cyclic contrastive language-image pre-training (2022). doi:10.48550/arXiv.2205.14459.
[46] X. Hu, K. Zhang, L. Xia, A. Chen, J. Luo, Y. Sun, K. Wang, N. Qiao, X. Zeng, M. Sun, et al., ReCLIP: Refine contrastive language image pre-training with source free domain adaptation (2023). doi:10.48550/arXiv.2308.03793. 4
[47] J. Yu, Z. Wang, V. Vasudevan, L. Yeung, M. Seyedhosseini, Y. Wu, CoCa: Contrastive captioners are image-text foundation models (2022). doi:10.48550/arXiv.2205.01917. 4
[48] Y. Rao, W. Zhao, G. Chen, Y. Tang, Z. Zhu, G. Huang, J. Zhou, J. Lu, DenseCLIP: Language-guided dense prediction with context-aware prompting (2021). doi:10.48550/arXiv.2112.01518.
[49] L. Yao, R. Huang, L. Hou, G. Lu, M. Niu, H. Xu, X. Liang, Z. Li, X. Jiang, C. Xu, FILIP: Fine-grained interactive language-image pre-training (2021). doi:10.48550/arXiv.2111.07783. 4
[50] N. Mu, A. Kirillov, D. Wagner, S. Xie, SLIP: Self-supervision meets language-image pre-training (2021). doi:10.48550/arXiv.2112.12750. 4
[51] Q. Sun, Y. Fang, L. Wu, X. Wang, Y. Cao, EVA-CLIP: Improved training techniques for CLIP at scale (2023). doi:10.48550/arXiv.2303.15389. 4
[52] P. Gao, S. Geng, R. Zhang, T. Ma, R. Fang, Y. Zhang, H. Li, Y. Qiao, CLIP-Adapter: Better vision-language models with feature adapters (2021). doi:10.48550/arXiv.2110.04544. 5, 6, 7
[53] K. Zhou, J. Yang, C. C. Loy, Z. Liu, Learning to prompt for vision-language models (2022). doi:10.1007/s11263-022-01653-1. 5, 6, 7
[54] Y. Yang, J. Ko, S.-Y. Yun, Improving adaptability and generalizability of efficient transfer learning for vision-language models (2023). doi:10.48550/arXiv.2311.15569v1. 5, 6
[55] B. Lester, R. Al-Rfou, N. Constant, The power of scale for parameter-efficient prompt tuning (2021). doi:10.48550/arXiv.2104.08691. 5
[56] K. Zhou, J. Yang, C. C. Loy, Z. Liu, Conditional prompt learning for vision-language models (2022). doi:10.48550/arXiv.2203.05557. 6
[57] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: A large-scale hierarchical image database, in: IEEE Conference on Computer Vision and Pattern Recognition, 2009, IEEE, Piscataway, NJ, 2009, pp. 248–255. doi:10.1109/CVPR.2009.5206848. 6
[58] M. D. Zeiler, Adadelta: An adaptive learning rate method (2012). doi:10.48550/arXiv.1212.5701. 6, 7
[59] K. Zhou, Y. Yang, Y. Qiao, T. Xiang, Domain adaptive ensemble learning (2021). doi:10.1109/TIP.2021.3112012. 7
[60] K. Zhou, Z. Liu, Y. Qiao, T. Xiang, C. C. Loy, Domain generalization: A survey (2023). doi:10.1109/TPAMI.2022.3195549. 7
[61] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al., Segment anything (2023). doi:10.48550/arXiv.2304.02643. 8
[62] H. Wang, P. K. A. Vasu, F. Faghri, R. Vemulapalli, M. Farajtabar, S. Mehta, M. Rastegari, O. Tuzel, H. Pouransari, SAM-CLIP: Merging vision foundation models towards semantic and spatial understanding (2023). doi:10.48550/arXiv.2310.15308. 14
6. Llama-3 prompt
We followed the basic prompt assembly described for Llama-2 [18] because, up to the date of this publication, an in-depth description of the Llama-3 prompt format was still missing. The Llama-2 chat version was trained with a variety of system prompts following patterns like "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.". We included a similar system prompt but tailored it to the targeted domain. The brackets {{}} mark where we insert the data.
Listing 1. System prompt used in the ILID generation pipeline's text transformation step.
You are a helpful assistant for a company that sells industrial products.\n
Do not ask for further details or state additional questions.\n
Do not add additional information or details that are not given by the user.\n
Listing 2. User prompt used in the ILID generation pipeline's text transformation step.
Summarize 'Label: {{}} Text: {{}}'\n
returning the following information:\n
(1) a long label or name of the product without ids, numbers, codes, or sizes
(2) a short label or name of the product with a maximum of 4 words and shorter than the long label
(3) description of the product with a maximum of 20 words without ids, numbers, codes, or sizes
(4) material with a maximum of 5 words
(5) material finish/color with a maximum of 5 words
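For illustration, the following minimal Python sketch shows how the two prompts above could be assembled for a chat-tuned Llama-3 model via the Hugging Face transformers chat template. The checkpoint name, generation settings, and the surrounding function are assumptions and do not reproduce our exact pipeline code.

# Illustrative sketch (not the exact pipeline code): assembling the system and
# user prompts above for a chat-tuned Llama-3 model with Hugging Face
# transformers. The checkpoint id and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint

SYSTEM_PROMPT = (
    "You are a helpful assistant for a company that sells industrial products.\n"
    "Do not ask for further details or state additional questions.\n"
    "Do not add additional information or details that are not given by the user.\n"
)

USER_TEMPLATE = (
    "Summarize 'Label: {label} Text: {text}'\n"
    "returning the following information:\n"
    "(1) a long label or name of the product without ids, numbers, codes, or sizes "
    "(2) a short label or name of the product with a maximum of 4 words and shorter than the long label "
    "(3) description of the product with a maximum of 20 words without ids, numbers, codes, or sizes "
    "(4) material with a maximum of 5 words "
    "(5) material finish/color with a maximum of 5 words"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")


def transform(label: str, text: str) -> str:
    """Run the text transformation step for one crawled product page."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": USER_TEMPLATE.format(label=label, text=text)},
    ]
    # apply_chat_template inserts the Llama-3-specific special tokens, so no
    # manual Llama-2-style [INST] assembly is required here.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)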
7. Excerpt from the dataset
Fig. 11 and Fig. 12 each depict two samples from the ILID given the keywords "hinge" and "locking assembly". Based on the language labels, we can observe that the LLM performs differently in extracting the relevant information. As an example, confusion between material and material finish occurs when the product page states more than one exact product configuration.
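For illustration, the following minimal Python sketch reads ILID entries such as the ones depicted in Fig. 11 and Fig. 12 and yields image-text pairs. The metadata path and the choice of label_short as the default caption field are assumptions, not a prescribed loader.

# Illustrative sketch (assumed file layout, not the published loader): reading
# ILID entries such as the ones shown in Fig. 11 and Fig. 12 and yielding
# (image, caption) pairs for language-image training or evaluation.
import json
from pathlib import Path
from typing import Iterator, Tuple

from PIL import Image

DATASET_ROOT = Path("ilid")                  # assumed dataset root
METADATA_FILE = DATASET_ROOT / "ilid.json"   # assumed metadata file, one entry per sample


def iter_image_text_pairs(caption_field: str = "label_short") -> Iterator[Tuple[Image.Image, str]]:
    """Yield (image, caption) pairs; caption_field may be any language label,
    e.g. "label_short", "label_long", "description", "material",
    or "material_finish"."""
    for entry in json.loads(METADATA_FILE.read_text()):
        image = Image.open(DATASET_ROOT / entry["image"]).convert("RGB")
        yield image, entry[caption_field]


if __name__ == "__main__":
    # Sanity check: print the captions of the first three samples.
    for i, (_, caption) in enumerate(iter_image_text_pairs()):
        print(caption)
        if i == 2:
            break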
8. Additional language-guided segmentation results
Fig. 13 and Fig. 14 show supplementary results on language-guided image segmentation. In Fig. 13, we prompted for "socket": zero-shot CLIP does not predict any mask as positive, while our approach segments all sockets. In Fig. 14, the results of our most challenging scene are depicted, in which we prompt for "bracket for construction profile".
The brackets are imaged far differently than the ones from catalog images, and sometimes they are barely visible.

{
  "id": "...",
  "image": "...",
  "label_short": "clevis mounting hinge",
  "label_long": "bracket hinge for clevis mounting",
  "description": "Rigid hinge for clevis mounting applications",
  "material": "rigid metal",
  "material_finish": "black oxide finish"
}
{
  "id": "...",
  "image": "...",
  "label_short": "adjustable hinge mechanism",
  "label_long": "hinge with adjustable friction mechanism",
  "description": "A hinge with adjustable friction for smooth motion and secure locking",
  "material": "zinc die casting",
  "material_finish": "silver plastic coating"
}
Figure 11. Two samples from the ILID given the keyword "hinge".
{ "id": "...", "image": "...", "label_short": "locking qpq assembly", "label_long": "locking assembly with qpq coating", "description": "High corrosion resistance and improved fatigue strength for food safe applications", "material": "steel", "material_finish": "qpq-coated" } { "id": "...", "image": "...", "label_short": "stainless steel locking bar", "label_long": "locking assembly stainless steel bar", "description": "A locking assembly designed for industrial use made from high-quality stainless steel for durability and resistance", "material": "stainless steel aisi", "material_finish": "stainless steel finish" } Figure 12. Two samples from the ILID given the keyword ”locking assembly” . visible.
At first sight, the results do not show good performance, especially since there are a few non-detected brackets and a few false positive predictions. We attribute the false positives at the top to the cropping strategy, while we have no explanation for the false predictions on the lower right. The false positives can result from the axis-aligned cropping strategy of the used method, in which a cropped segment includes parts of the surroundings; many of the false positive segments contain parts of brackets. Employing more sophisticated language-image segmentation methods, like [62], which build on CLIP and SAM but do not rely on such a straightforward cropping strategy, could prevent such wrongful predictions. In contrast, we observed worse performance during segment classification with CLIP when the background was not included in the segments.
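For illustration, the following minimal Python sketch outlines this segment classification step: each segment proposal (e.g., from SAM) is cropped with an axis-aligned box that keeps some surrounding context and scored against the text prompt with CLIP. The zero-shot "ViT-B/32" weights, the box format, and the threshold are assumptions and stand in for the fine-tuned weights and the exact decision rule.

# Illustrative sketch of the segment classification discussed above. The plain
# OpenAI "clip" interface, zero-shot weights, box format, and threshold are
# assumptions for illustration only.
from typing import List, Tuple

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def score_segments(
    image: Image.Image,
    boxes: List[Tuple[int, int, int, int]],  # (left, top, right, bottom) per proposal
    prompt: str,
    threshold: float = 0.25,                 # assumed cosine-similarity cut-off
) -> List[bool]:
    """Return one positive/negative decision per segment proposal."""
    text_tokens = clip.tokenize([prompt]).to(device)
    crops = torch.stack([preprocess(image.crop(box)) for box in boxes]).to(device)

    with torch.no_grad():
        image_features = model.encode_image(crops)
        text_features = model.encode_text(text_tokens)

    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    similarities = (image_features @ text_features.T).squeeze(-1)  # one score per crop

    return [float(score) > threshold for score in similarities]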
Figure 13. Language-guided segmentation results given the prompt "socket" compared to zero-shot CLIP under the same settings (panels: input, zero-shot CLIP, ours).
Figure 14. Language-guided segmentation results given the prompt "bracket for construction profile" compared to zero-shot CLIP under the same settings (panels: input, zero-shot CLIP, ours).
