Solutions Factory
Turn scientifically proven concepts into reality
Euranovians are crafters at heart. EURA NOVA incorporates a “solution factory”: an incubator where we craft our business solutions and where researchers, consultants and solution engineers come together. Our engineers develop prototypes designed by our researchers. These prototypes and solutions are built to address the pain points our consultants observe while working with customers. Thanks to this close collaboration between consultants and researchers, we can offer you the right IT solution, designed to automate your business processes and support your digital transformation.
An end-to-end Big Data solution
Digazu's end-to-end data platform combines a data lake, a data hub, and an MLOps platform. Bring holistic data management into your company with Digazu. It provides user-friendly interfaces while integrating with metadata management, security, and data quality tools.
Story / Data scientists are in short supply. In most businesses, they spend more time looking for data and building data pipelines (a.k.a. data engineering) than doing what really makes them tick, and more importantly, what brings the most value to the business: building data science models.
Digazu emerged from this observation to make your data usable and accessible to everyone, creating business value and efficiency. Digazu helps you become data-centric and develop your data culture and management to create added value.
Digazu is built on cutting-edge technologies such as Kafka and Flink, tested and selected by our Research Centre and adopted by digital giants such as LinkedIn, Netflix and Alibaba. The result is a user-centric, user-friendly platform.
Elastic and scalable Message Queue for the Cloud
RoQ is a message queue architecture and algorithm able to scale out elastically to adapt to any applied messaging load.
Story / Back in 2011, IoT, cloud computing and on-demand resource usage schemes were gradually emerging, and with them the need for real-time interactions became evident.
At that time, existing message queues could only scale to a certain number of clients and were not able to scale out elastically.
In this environment, we felt it was the right time to fill the gap, so we introduced an elastic message queue under the name of RoQ. Using this system, messages could be distributed automatically and dynamically to newly created instances when required, or redistributed to a subset of queue instance nodes.
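The redistribution idea can be illustrated with a minimal sketch. This is not RoQ's actual code or API, just a hash-based router (an assumption for illustration) that maps message keys to a dynamic set of queue instances and rebalances buffered messages when a new instance joins:

```python
import hashlib

class ElasticQueueRouter:
    """Illustrative sketch only (not RoQ's implementation): route
    messages to a dynamic set of queue instances and rebalance
    when an instance is added."""

    def __init__(self, instances):
        self.instances = list(instances)
        self.queues = {name: [] for name in self.instances}

    def _pick(self, key):
        # Deterministic hash so the same key maps to the same instance.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.instances[h % len(self.instances)]

    def publish(self, key, message):
        self.queues[self._pick(key)].append((key, message))

    def add_instance(self, name):
        # Scale out: add a node, then redistribute the buffered
        # messages across the new set of queue instances.
        pending = [item for q in self.queues.values() for item in q]
        self.instances.append(name)
        self.queues = {n: [] for n in self.instances}
        for key, message in pending:
            self.publish(key, message)
```

In a real elastic queue the rebalancing would move only a subset of partitions rather than rehashing everything, but the sketch shows the core property: capacity grows by adding instances, without losing messages.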
Just as LinkedIn designed Kafka, we incubated RoQ, which we presented at IEEE CloudCom 2011.
With this incubated solution, we gained in-depth insight into message queues: how to make them suitable for large-scale applications, how to build them around event-driven architectures, and how to design them for cloud infrastructure management.
What’s in it for you? We can advise you on the pros and cons, show you how to scale the infrastructure and the whole chain, and eventually roll it out for you.
In-memory distributed graph database
STEFFI is a distributed graph database, connecting data as it’s stored and enabling queries at speed.
Story / Back in 2013, we had the intuition that management of interactions would be key for our clients to develop a better understanding of their business, their customers and therefore to create greater value.
As a response, we incubated STEFFI, a scalable graph database. Comparable to a distributed version of Neo4j or Titan, STEFFI ran complicated traversal operations over large datasets at speed.
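To make the idea of traversal over a distributed store concrete, here is a minimal sketch. It is not STEFFI's code: the partition layout and vertex names are invented for illustration, and the remote lookups a real distributed database would perform are replaced by a local search over partitions:

```python
from collections import deque

# Illustrative sketch only (not STEFFI's implementation): adjacency
# lists sharded across "partitions", as a distributed graph store
# would hold them. Vertex names are made up for the example.
partitions = [
    {"alice": ["bob"], "bob": ["carol"]},       # partition 0
    {"carol": ["dave", "alice"], "dave": []},   # partition 1
]

def neighbours(vertex):
    # In a real distributed database this would be a remote lookup.
    for part in partitions:
        if vertex in part:
            return part[vertex]
    return []

def traverse(start):
    # Breadth-first traversal across partition boundaries.
    seen, order, frontier = {start}, [], deque([start])
    while frontier:
        v = frontier.popleft()
        order.append(v)
        for n in neighbours(v):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return order

# traverse("alice") visits alice, bob, carol, dave
```

The hard part a system like STEFFI solves is doing such traversals fast when each `neighbours` call may cross machine boundaries, which is where in-memory storage and indexing pay off.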
We published our first STEFFI paper at IEEE Big Data in 2013, where its in-memory and indexing architecture brought real innovation.
With this incubated solution, we have been able to develop much more accurate churn-detection machine learning algorithms in telecom and insurance, detect anomalies in industrial processes, and build fraud detection and data governance systems.
What’s in it for you? We can implement graph mining in your system, advise you on what is at stake and analyse graphs rapidly.
Distributed Data Processing System
AROM is a distributed data processing framework based on Data Flow Graphs, designed to address Big Data problems.
Story / Back in 2010, we had the intuition that value creation would go through data mining. One thing was sure: companies that developed the capacity to process large volumes of data would gain a great competitive advantage. AROM was crafted from this observation.
Like Spark, our solution loaded big data, performed computations on it in a distributed way, and then stored the results.
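The load, distributed compute, store pattern can be sketched in a few lines. This is not AROM's actual API; the function names and the partitioning scheme are assumptions made for illustration:

```python
# Illustrative sketch only (not AROM's API): a minimal data-flow
# mimicking the load -> distributed compute -> store pattern.

def load(values, n_partitions=2):
    # Split the input into partitions, as a distributed loader would.
    return [values[i::n_partitions] for i in range(n_partitions)]

def dmap(f, parts):
    # Apply f to each partition independently; in a real framework
    # each partition would run on a different worker in parallel.
    return [[f(x) for x in part] for part in parts]

def store(parts):
    # Collect the partitions back into a single sorted result.
    return sorted(x for part in parts for x in part)

# A tiny data-flow graph: load numbers, square them, store.
result = store(dmap(lambda x: x * x, load([1, 2, 3, 4])))
# result == [1, 4, 9, 16]
```

In a data-flow-graph framework such as AROM, each of these stages would be a node in the graph and the scheduler would place partitions on workers; the sketch only shows the shape of the computation.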
In June 2011, we published a paper dedicated to our work on AROM, followed by another presented at IEEE CloudCom.
With this incubated solution, we were able to support our clients with solid and sustainable architectures. We designed the first data lakes for IoT platforms, the first data processing platforms for telecom operators, machine learning libraries for fully distributed marketing, and the first data hubs in banking and finance.
What’s in it for you? We know what lies behind a distributed data processing system: how to design it, how it works, and how to set it up for any business.