Sparsity and machine learning for design and optimisation


To operate in real time, complex simulation models such as the ones implemented in STREAM-0D need to cut down computational times while preserving the reliability of their predictions. This is where the notion of ‘sparsity’ comes into play.

Among modelling and simulation tools, three-dimensional Computational Fluid Dynamics and Mechanics have become omnipresent in the lives of engineers and scientists. Whether the task is optimising the aerodynamic performance of a turbine blade in a reactive flow or predicting the average life expectancy of a composite part subject to mechanical failure, simulation is the universally accepted answer.

Robustness and accuracy come at the cost of high-performance computing. The price to pay for predictiveness is the massive usage of computing resources and long CPU runtimes. When the time for decision making is crucial, it is unthinkable to wait hours or even days for the outcome of a single simulation, especially when designers are supposed to evaluate multiple ‘what-if’ scenarios.

In practice, design and optimisation are still done using charts and diagrams. These ‘rules of thumb’ of the engineering world are simply invaluable. They are the result of years of accumulated experience, based on trial and error more often than not.

Today, the real challenge for scientists is the question: “Can experience itself be engineered?” Or, put differently, “Can knowledge be acquired on a systematic basis in order to create guidelines and best practices?”


Making more from less

With the rise of artificial intelligence and computational thinking, we can leverage the power of machine learning through the notion of sparsity.

Sparsity is the fundamental idea that, through the right representation, information can be encoded using very little data. As a consequence, it is also reasonable to expect that the relevant data can be recovered with little computing effort, compared to extensive search strategies or trial-and-error learning.
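As a toy illustration of this idea, consider a signal that looks dense in the time domain but is sparse in the Fourier basis. The sketch below (plain NumPy; the signal and the choice of basis are our own illustrative assumptions, not taken from STREAM-0D) keeps only the two largest Fourier coefficients and still reconstructs a 1000-sample signal almost exactly:

```python
import numpy as np

# A signal that is dense in time but sparse in frequency:
# 1000 samples, yet only two Fourier modes carry the information.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

coeffs = np.fft.rfft(signal)

# Keep only the k largest coefficients and discard the rest.
k = 2
sparse = np.zeros_like(coeffs)
top = np.argsort(np.abs(coeffs))[-k:]
sparse[top] = coeffs[top]

reconstructed = np.fft.irfft(sparse, n=len(signal))

# Two complex numbers out of 501 suffice to recover all 1000 samples.
error = np.max(np.abs(signal - reconstructed))
print(f"max reconstruction error: {error:.2e}")
```

In the right representation, two numbers encode what naively takes a thousand, which is exactly the leverage sparsity offers.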

The same principles applied in the aforementioned fields are now being applied to industrial production lines, which generate large amounts of information about the production process. The STREAM-0D project’s goal is to predict in real time the occurrence of defects on production lines in order to adapt process variables and optimise production, with the final aim of achieving zero-defect production. To do so, STREAM-0D relies on simulation models built on this very same concept of sparsity.

Models are fed with actual data from online measurements: based on the model’s predictions, they allow workers to control the critical steps of the line so as to adjust the product to the exact design specifications, or to quickly change specifications to produce customised batches.
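A minimal sketch of what such model-based adjustment could look like. The surrogate function, the variable names, and the tolerance below are all hypothetical, chosen only to illustrate the control idea, not the project's actual models:

```python
# Hypothetical linear reduced model: clearance (mm) -> predicted defect (mm).
def surrogate(clearance_mm):
    return 0.8 * clearance_mm - 0.02

def adjust(measured_clearance_mm, tolerance=0.05):
    """Return the correction to apply to the clearance, or 0.0 if in spec."""
    predicted_defect = surrogate(measured_clearance_mm)
    if abs(predicted_defect) > tolerance:
        # Invert the (linear) surrogate: the clearance with zero predicted defect.
        target_clearance = 0.02 / 0.8
        return target_clearance - measured_clearance_mm
    return 0.0

print(adjust(0.10))  # out of spec: a negative correction is suggested
print(adjust(0.02))  # within tolerance: no correction needed
```

The point is that the measurement-to-correction step is a cheap function evaluation, fast enough to run inside the cycle time of the line.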


Computing v. human experience

Just as compressed sensing revolutionised the world of signal processing by defying the famous Nyquist–Shannon sampling theorem, we are now walking a road leading to compressed computing. Through a smart and active design of experiments, we can discover the underlying physical principles of a complex simulation model and distil its accuracy into a new, faster meta-model that is ready to be used for design applications.
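A bare-bones sketch of the meta-model idea: evaluate an ‘expensive’ model at a handful of design points chosen in advance, then fit a cheap surrogate that can be queried instantly. The stand-in function, the sample count, and the polynomial degree are illustrative assumptions only:

```python
import numpy as np

# Stand-in for an expensive simulation (the real models are far more
# complex; this smooth 1D function is only for illustration).
def expensive_model(x):
    return np.exp(-x) * np.sin(3 * x)

# Design of experiments: run the costly model at just a few points.
samples = np.linspace(0.0, 2.0, 8)
responses = expensive_model(samples)

# Meta-model: a low-order polynomial fitted to those few runs.
meta_model = np.polynomial.Polynomial.fit(samples, responses, deg=5)

# The surrogate can now be evaluated instantly at any design point.
x_new = 1.3
print(f"surrogate error at x={x_new}: "
      f"{abs(meta_model(x_new) - expensive_model(x_new)):.1e}")
```

Eight model runs buy a surrogate that answers ‘what-if’ queries at essentially zero cost, which is what makes interactive design exploration possible.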

Yes, the engineers of tomorrow will still be looking at charts and diagrams the same way we do today, but those charts will embody the combined power of high-performance computing and human experience.


This article was provided by ECN – École Centrale de Nantes, STREAM-0D project consortium partner.

