The Future of Manufacturing

This is what the future looks like in 2030:

“AI-centric control systems run manufacturing plants by optimizing high-level objectives while following constraints and other instructions from human operators. Human operators spend most of their time on creative thinking and strategic planning, which may translate into new economic objectives for the AI systems.

Operators only need to monitor plant performance and intervene in critical operations; AI systems automatically handle everything else, such as process optimization, design of experiments, management of process variability, and predictive maintenance, in an optimal way.”

In other words, we believe semi-autonomous manufacturing, or more precisely, self-optimizing manufacturing, will be achieved by 2030. This future comes with enormous benefits: increased productivity and efficiency, improved quality and safety, enhanced flexibility and adaptability, and more. To realize these benefits, however, several challenges remain to be solved today:

  1. Low data quality

    This includes problems such as data sets that are too small, poor data collection or integration, and noisy or inaccurate data. High-quality data is the foundation of any AI system, while poor-quality data drastically increases the difficulty of training AI models.

    A scalable, resilient, and robust data hub is undoubtedly the most critical piece in achieving autonomy.
  2. Lack of advanced optimization algorithms

    It can be very complex to optimize a plant with many unit operations running simultaneously. Model Predictive Control and Reinforcement Learning algorithms typically need accurate system-dynamics simulations and deep expertise, which do not scale and can become very expensive.

    A better framework, one that does not require a system-dynamics model or a large amount of data and that yields a scalable, much cheaper solution, needs to be widely adopted (a minimal sketch of one such model-free approach follows this list).
  3. Many tasks currently require heavy human involvement

    For example, product design and development, process analysis, and reliability are usually full-time jobs for a team of highly specialized people.

    Each task may need a dedicated tool that is sufficiently intelligent to accomplish all the routine work and minimize the effort needed for critical operations.
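
As an illustration of the second challenge, the sketch below tunes two plant setpoints directly from measured batch outcomes using Bayesian optimization; scikit-optimize is one possible library choice, and the cost surface, setpoint names, and bounds are hypothetical stand-ins for a real plant. The point is only to show the flavor of optimizing from measurements, with no system-dynamics model and a small trial budget.

```python
# Minimal, illustrative sketch: optimize plant setpoints directly from
# measured batch outcomes, with no system-dynamics model. Bayesian
# optimization (here via scikit-optimize's gp_minimize) keeps the number
# of expensive trials small. All names, bounds, and the cost surface are
# hypothetical.
import numpy as np
from skopt import gp_minimize


def run_batch_and_measure_cost(setpoints):
    """Run one batch at the given setpoints and return its measured cost.

    In a real plant this would write setpoints to the control system and
    read back yield, quality, and energy metrics; a noisy synthetic
    surface stands in for the unknown process here.
    """
    temperature, feed_rate = setpoints
    yield_loss = (temperature - 82.0) ** 2 / 50.0 + (feed_rate - 3.5) ** 2
    energy_cost = 0.02 * temperature + 0.5 * feed_rate
    return yield_loss + energy_cost + np.random.normal(scale=0.05)


# Each evaluation is one real (expensive) batch, so the trial budget is small.
result = gp_minimize(
    run_batch_and_measure_cost,
    dimensions=[(60.0, 100.0),  # temperature setpoint, degC
                (1.0, 6.0)],    # feed rate, kg/min
    n_calls=25,
    random_state=0,
)

print("best setpoints found:", result.x)
print("best measured cost:", result.fun)
```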

And the best solution to these challenges is to tackle all of them with one complete product. A complete product can minimize integration cost while maximizing the return from AI systems.

Specifically, attacking each problem with separate vendors or internal teams could result in a poor outcome, mainly for two reasons:

  1. Optimization cannot be done across weakly integrated subsystems, each with different business interests

    The whole self-optimizing system can be thought of as a three-layer system: data, analytics, and optimization. Data enables analytics, both feed into optimization, and optimization in turn improves data and analytics.

    They form a virtuous cycle if they work tightly together and evolve toward the same end goal. When each layer comes from a different vendor, there may be too many barriers to making their products work as closely together as needed.

    But more importantly, it becomes too difficult to ask vendors to build their products on the belief that it is sometimes worth sacrificing their own benefit for the sake of the whole system.

    For example, the optimization vendor may need data integration done in a way that significantly increases the data vendor's cost, perhaps even putting them out of business. Although the whole system would gain significantly, it is unlikely any data vendor would be willing to do that.
  2. Data from just one organization cannot unleash the full power of AI.

    Training a large foundation model and then fine-tuning it with a small amount of task-specific data yields the best possible result. A foundation model is a model trained on a massive amount of data that extracts common knowledge from that data.

    Fine-tuning a foundation model asks the model to learn further about a specific task. Because it has already learned the common knowledge, fine-tuning needs only a small fraction of the data to achieve superior performance (see the sketch after this list).

    It would be ideal if all the data from all manufacturing organizations could be “pooled” to train the foundation model, but this is extremely unlikely given that almost every data point in the manufacturing industries is proprietary.

    On the other hand, the data volume or velocity of any single organization may not be enough to train a foundation model. Instead, a third-party product that trains the foundation model while complying with all data-protection and privacy laws and regulations should come into play here.

    It is a win-win situation for both the manufacturing organizations and the third-party product. The manufacturing organizations receive much better results while keeping their competitive advantages, since the ones with better data will get better results from fine-tuning.

    In other words, the organization with the best data still achieves the best outcome, while every organization now gets upgraded performance. And the product that manages the foundation model and fine-tuning becomes a very successful business, so the company behind it will be well motivated to build the best AI systems.

    More importantly, this creates another virtuous cycle: the more organizations that participate, the better the foundation model and AI systems get, which in turn attracts more organizations.
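
To make the fine-tuning argument concrete, here is a minimal sketch in PyTorch. It assumes a pretrained backbone stands in for a foundation model (the SensorBackbone class, tensor shapes, and checkpoint name are all illustrative, not any specific product's API): the backbone is frozen, and only a small task-specific head is trained on one plant's small, proprietary data set, which is why a small fraction of data can still deliver strong task performance.

```python
# Minimal, illustrative sketch of fine-tuning: a pretrained "foundation"
# backbone is frozen and only a small task-specific head is trained on a
# handful of labelled samples. The architecture, tensor shapes, and the
# checkpoint name are assumptions made for illustration only.
import torch
import torch.nn as nn


class SensorBackbone(nn.Module):
    """Stand-in for a foundation model pretrained on large pooled sensor data."""

    def __init__(self, n_sensors=32, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.encoder(x)


backbone = SensorBackbone()
# backbone.load_state_dict(torch.load("foundation_weights.pt"))  # hypothetical checkpoint
for param in backbone.parameters():       # freeze the "common knowledge"
    param.requires_grad = False

head = nn.Linear(128, 1)                  # small task-specific head, e.g. batch-yield prediction

# One plant's small, proprietary fine-tuning set: 64 labelled batches.
x_small = torch.randn(64, 32)
y_small = torch.randn(64, 1)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):                   # a few epochs suffice on small data
    optimizer.zero_grad()
    prediction = head(backbone(x_small))
    loss = loss_fn(prediction, y_small)
    loss.backward()
    optimizer.step()

print("final fine-tuning loss:", loss.item())
```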

Hence, although conventional wisdom says that businesses, especially startups, should focus, focus, and focus, those with a narrow focus can only stay ahead of the competition in the short term and will quickly lose once the ones with complete products enter the two virtuous cycles.

Building such a complete product is without a doubt much harder. However, it comes with much bigger rewards! Quartic.ai is committed to making this future a reality.
