Holonix is a partner in the iQonic EU project, with the role of IT technical partner in defining a zero-defect manufacturing solution in the opto-electronics field.

One cornerstone of the iQonic system is the achievement of interoperability for sensor and legacy-system data, enabling the information flow from the field to the cloud modules in charge of intelligent prognostic and diagnostic actions. According to the iQonic Overall Architecture, the KBS is the main submodule directly responsible for managing the information flow, data schemas, and metadata; it consists of a demonstration software solution that evolves the IT infrastructure for data management of i-Live Machines.

It consists of several modules managing, in particular:

  • Data structure and metadata/semantics definition, with a specific Registry application (a hypothetical declaration is sketched after this list)
  • Real-time data streaming, with a custom use of Kafka technology
  • Data persistency/retrieval, with a specific big-data database exposing a REST interface.
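To make the Registry's role more concrete, here is a minimal sketch of how a device type and a data series might be declared against it. The endpoint URLs, field names, and schema layout are illustrative assumptions for this example, not the actual KBS API.

```python
import requests

REGISTRY_URL = "https://kbs.example.org/registry"  # hypothetical endpoint

# Declare a device type mapping the assembly machine (illustrative model)
device_type = {
    "name": "ficontec-assembly-machine",
    "description": "FiconTEC diode module assembly machine",
}
requests.post(f"{REGISTRY_URL}/device-types", json=device_type).raise_for_status()

# Declare a data series bound to that type; its schema formalizes the
# structure of every message exchanged for this series
data_series = {
    "name": "diode-bonding-temperature",
    "deviceType": "ficontec-assembly-machine",
    "unit": "degC",
    "schema": {
        "type": "object",
        "properties": {
            "timestamp": {"type": "string", "format": "date-time"},
            "value": {"type": "number"},
        },
        "required": ["timestamp", "value"],
    },
}
requests.post(f"{REGISTRY_URL}/data-series", json=data_series).raise_for_status()
```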

Following the IT development work of the previous months, the KBS is now available, cloud-deployed, up and running, and currently supports two use cases.

In particular, the PRIMA Electro demonstrator refers to a high-power laser multi-emitter diode production process for the multistage zero-defect manufacturing of laser sources with increased quality. One of the main areas of action is the improvement of the diode module assembly strategy, exploiting the automation capabilities and data gathering features of the FiconTEC machine to optimize and control the product configuration depending on the on-line data gathered on the mounted diodes and components, in order to reduce the emergence of defective modules and the production of defects before they are observed. This is being achieved with a proper combination of several iQonic modules, including the KBS, exploited as a data gathering and storage system.

The iQonic KBS manages several types of objects and enables data collection from connected devices. A device type is declared to map the FiconTEC machine installed at the PRIMA premises, and instances of that type are created to collect data from the individual devices. Data series are declared to manage the different types of data involved in the use case; each data series can be configured with several options to handle large amounts of data correctly, and its dependencies define the structure of the data exchange through a schema/structure formalization.

According to these object definitions, data are processed by dedicated services: data received from connected devices are validated and pushed to the streaming platform (a Kafka cluster), so that they can be made available for further processing or recorded in long-term storage. A layer of REST APIs exposes several endpoints for different types of queries: the avatar endpoint provides a quick representation of the last state of a machine/device, while the history endpoint provides access to large amounts of data, processing and filtering them by specific data series or devices.
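As an illustration of the ingestion path, the sketch below validates an incoming reading against the declared schema and pushes it to a Kafka topic. It uses the generic kafka-python and jsonschema libraries; the broker address, topic name, and schema are assumptions carried over from the Registry example above, not the project's actual configuration.

```python
import json

from jsonschema import validate  # pip install jsonschema
from kafka import KafkaProducer  # pip install kafka-python

# Schema mirroring the (hypothetical) data series declared in the Registry
READING_SCHEMA = {
    "type": "object",
    "properties": {
        "timestamp": {"type": "string", "format": "date-time"},
        "value": {"type": "number"},
    },
    "required": ["timestamp", "value"],
}

producer = KafkaProducer(
    bootstrap_servers="kafka.example.org:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def ingest(reading: dict) -> None:
    """Validate a device reading and publish it to the streaming platform."""
    validate(instance=reading, schema=READING_SCHEMA)  # reject malformed data
    producer.send("diode-bonding-temperature", value=reading)  # hypothetical topic

ingest({"timestamp": "2021-03-01T10:15:00Z", "value": 42.7})
producer.flush()
```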
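On the retrieval side, the avatar and history endpoints can be queried over plain HTTP. The sketch below shows what such queries might look like; the base URL, device identifier, and query parameters are hypothetical placeholders, since the actual endpoint paths are not documented here.

```python
import requests

API_URL = "https://kbs.example.org/api"  # hypothetical base URL

# Avatar endpoint: quick representation of the last state of one device
avatar = requests.get(f"{API_URL}/devices/ficontec-01/avatar").json()
print(avatar)

# History endpoint: bulk retrieval, filtered by data series and time window
history = requests.get(
    f"{API_URL}/history",
    params={
        "device": "ficontec-01",
        "series": "diode-bonding-temperature",
        "from": "2021-03-01T00:00:00Z",
        "to": "2021-03-02T00:00:00Z",
    },
).json()
print(len(history), "readings retrieved")
```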