Solutions Factory
Turn scientifically proven concepts into reality
Euranovians are crafters at heart. Euranova's "solution factory" is an incubator where we build our business solutions. There, researchers, consultants and solution engineers work together: our engineers develop prototypes designed by our researchers, and those prototypes and solutions tackle the pain points our consultants observe when working at customer sites. Thanks to this match between consultants and researchers, we can offer you the right IT solution, designed to automate your business processes and support your digital transformation.
An end-to-end big data solution
Digazu's end-to-end data engineering platform integrates the market's best practices in the simplest possible way for users, so you can get straight to value with your data science and analytics projects. Digazu was designed to work at scale and to integrate into your enterprise landscape, while its technological core builds on modern data architectures and powerful technologies.
Story / By implementing data architectures with many leading companies, we noticed that needs were converging towards data hubs. Yet we also realised that implementing a data hub required significant orchestration of components. At the same time, our research and development teams were identifying new technologies able to meet growing data engineering needs.
Digazu emerged from the need to make data usable and accessible to everyone, in order to create business value and efficiency. Digazu helps you become data-centric and develop the data culture and data management needed to create added value.
It was crafted using cutting-edge technologies such as Kafka and Flink. These technologies were tested and selected by our research centre, as well as by digital giants such as LinkedIn, Netflix and Alibaba. Digazu is user-centric and user-friendly.
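The pattern such platforms build on, continuous transformation of event streams as popularised by Kafka and Flink, can be illustrated with a minimal pure-Python sketch. The event schema and pipeline steps below are hypothetical examples, not Digazu's actual API:

```python
# Minimal sketch of the streaming pattern behind Kafka/Flink-style platforms:
# events flow through a pipeline of transformations before reaching consumers.
# The event fields and pipeline steps are illustrative, not Digazu's real API.

from typing import Iterable, Iterator


def parse(events: Iterable[str]) -> Iterator[dict]:
    """Turn raw 'user,action,amount' records into structured events."""
    for raw in events:
        user, action, amount = raw.split(",")
        yield {"user": user, "action": action, "amount": float(amount)}


def keep_purchases(events: Iterable[dict]) -> Iterator[dict]:
    """Filter step: forward only purchase events."""
    for e in events:
        if e["action"] == "purchase":
            yield e


def total_per_user(events: Iterable[dict]) -> dict:
    """Aggregation step: running total of purchase amounts per user."""
    totals: dict = {}
    for e in events:
        totals[e["user"]] = totals.get(e["user"], 0.0) + e["amount"]
    return totals


raw_stream = ["alice,purchase,10.0", "bob,view,0.0", "alice,purchase,5.0"]
totals = total_per_user(keep_purchases(parse(raw_stream)))
# totals == {"alice": 15.0}
```

In a real deployment the in-memory lists would be Kafka topics and the functions Flink operators, but the shape of the computation is the same.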
Elastic and scalable message queue for the cloud
RoQ is a message queue architecture and algorithm able to scale out elastically to adapt to any applied messaging load.
Story / Back in 2011, IoT, cloud computing and on-demand resource usage schemes emerged gradually. The need for real-time interactions became evident.
At that time, existing message queues could only scale up to a certain number of clients and were not able to scale out elastically.
In this environment, we felt it was the right time to fill the gap, so we introduced an elastic message queue under the name of RoQ. Using this system, a message could be distributed automatically and dynamically to newly created instances when required, or redistributed to a subset of queue instance nodes.
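The idea of redistributing messages when queue instances are created can be sketched with a simple hash-based assignment. This is an illustrative toy, not RoQ's actual algorithm, and the node names are invented:

```python
# Illustrative sketch of elastic message distribution: when a new queue
# instance joins, message keys are remapped so load spreads over the
# enlarged set of nodes. This mirrors the idea described for RoQ, not
# its actual implementation.

import hashlib


def assign(key: str, instances: list) -> str:
    """Deterministically map a message key to one of the live instances."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return instances[h % len(instances)]


instances = ["node-1", "node-2"]
keys = [f"msg-{i}" for i in range(1000)]
before = {k: assign(k, instances) for k in keys}

# Scale out: a third instance is created on demand, and messages are
# automatically redistributed over the new set of queue nodes.
instances.append("node-3")
after = {k: assign(k, instances) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
```

Plain modulo hashing moves roughly two thirds of the keys on a resize; a production system would typically use consistent hashing so that only about one key in N moves when the N-th node joins.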
At the time when LinkedIn was designing Kafka, we were incubating RoQ, and we presented it at IEEE CloudCom 2011.
With this incubated solution, we gained in-depth insight into message queues: how to make them suitable for large-scale applications, how to build them around event-driven architectures, and how to use them for cloud infrastructure management.
What’s in it for you? We can advise you on the pros and cons of message queues and on how to scale the infrastructure and the whole chain. If needed, we can also handle the roll-out for you.
In-memory distributed graph database
STEFFI is a distributed graph database. When data is stored, STEFFI connects it and enables queries at speed.
Story / Back in 2013, we had the intuition that managing interactions would be key for our clients to develop a better understanding of their business, their customers and therefore to create greater value.
As a response, we incubated STEFFI, a scalable graph database. Similar to a distributed version of Neo4j or Titan, STEFFI performed complicated traversal operations on large datasets.
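The kind of operation such a system speeds up can be sketched as a traversal over vertex data partitioned across machines. The partitioning scheme and graph below are invented for illustration and say nothing about STEFFI's internals:

```python
# Illustrative sketch of a graph traversal over vertex data partitioned
# across machines -- the kind of operation a distributed graph database
# such as STEFFI accelerates. The partitioning and graph are hypothetical.

from collections import deque

# Adjacency lists split over two "partitions" (machines).
partitions = [
    {"A": ["B", "C"], "C": ["D"]},   # partition 0
    {"B": ["D"], "D": []},           # partition 1
]


def neighbours(vertex: str) -> list:
    """Locate the partition owning the vertex and read its edges."""
    for part in partitions:
        if vertex in part:
            return part[vertex]
    return []


def reachable(start: str) -> set:
    """Breadth-first traversal that hops across partitions as needed."""
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for n in neighbours(v):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen


result = reachable("A")   # visits A, then B and C, then D
```

A real distributed engine replaces the partition lookup with network calls and keeps hot vertices in memory, which is where the performance work lies.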
We published our first STEFFI paper at IEEE Big Data in 2013, where its in-memory and indexing architecture brought real innovation.
With this incubated solution, we have been able to develop far more accurate machine learning algorithms for churn detection in telecoms and insurance, as well as for anomaly detection in industrial processes, not to mention fraud detection and data governance systems.
What’s in it for you? We can implement graph mining in your system, advise you on what is at stake, and analyse your graphs rapidly.
Distributed Data Processing System
AROM is a distributed data processing framework based on data flow graphs, designed to address big data problems.
Story / Back in 2010, we had the intuition that value creation would come through data mining. One thing was certain: companies that developed the capacity to process large volumes of data would gain a great competitive advantage. AROM was crafted from this observation.
Like Spark, our solution loaded big data, performed computations on it in a distributed way, and then stored the results.
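A data-flow graph of this kind can be sketched in a few lines: each operator consumes the outputs of its upstream operators, forming a load-compute-store chain. The operator names and scheduler below are our own toy example, not AROM's API:

```python
# Illustrative sketch of a data-flow graph: each operator consumes the
# outputs of its predecessors, mirroring how a framework like AROM (or
# Spark) loads data, computes in stages and stores results.


def load():
    """Source operator: in a real system this would read distributed storage."""
    return [1, 2, 3, 4]


def square(xs):
    """Transformation operator, applied element-wise (parallelisable)."""
    return [x * x for x in xs]


def total(xs):
    """Sink operator: reduce the stage output to a final result."""
    return sum(xs)


# The data-flow graph: operator name -> (function, upstream operators).
graph = {
    "load":   (load,   []),
    "square": (square, ["load"]),
    "total":  (total,  ["square"]),
}


def run(node: str, graph: dict):
    """Evaluate a node after recursively evaluating its dependencies."""
    fn, deps = graph[node]
    return fn(*(run(d, graph) for d in deps))


result = run("total", graph)   # squares 1..4, then sums: 30
```

A distributed engine would execute independent branches of the graph on different machines and materialise intermediate results; the graph structure is what makes that scheduling possible.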
In June 2011, we published a paper dedicated to our work on AROM followed by another one presented at IEEE CloudCom.
With this incubated solution, we were able to support our clients with solid and sustainable architectures. We designed the first data lakes for IoT platforms, the first data processing platforms for telecom operators, machine learning libraries for fully distributed marketing, and the first data hubs in banking and finance.
What’s in it for you? We know what lies behind a distributed data processing system, how to design it, how it works and how to set it up for any business.