Developed by: Inetum



Keenaï is a tool for monitoring, through a single entry point, the entire security of an Information System. The solution belongs to the SIEM category (Security Information and Event Management).

The development of Keenaï started in 2009. It is under the responsibility of the Gfi (now known as Inetum) Cybersecurity Business Unit (R&D located in Rennes, France); it has received French state support, and security certification is in progress.

Keenaï's main objectives are to:

  • Continuously record Information System activity
  • Detect internal and external attacks in real time
  • Identify and reduce security threats

The figure below presents the possible data sources and the global functionalities of Keenaï:

Sources and global functionalities


Keenaï is an adaptive solution and its architecture can be defined according to the environment:

  • Fully distributed environment:
    n servers (VMs/Docker containers) running in parallel
  • A configuration (resources, number of nodes, …):
    adapted to the context (expected events per second (EPS), topology, …)
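To illustrate how sizing from an expected event rate might be reasoned about, here is a minimal sketch; the per-node throughput figure and the heuristic itself are hypothetical, not Keenaï's actual sizing rules:

```python
def estimate_nodes(expected_eps: int, eps_per_node: int = 5000) -> int:
    """Hypothetical sizing heuristic: one processing node per
    `eps_per_node` events per second, with a minimum of one node.
    The 5000 EPS/node default is an illustrative assumption."""
    if expected_eps <= 0:
        raise ValueError("expected_eps must be positive")
    # Ceiling division: a partially loaded node still counts as a node.
    return -(-expected_eps // eps_per_node)
```

In practice such a figure would also depend on topology, retention policy, and the correlation rules enabled, as noted above.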

The Keenaï environment is based on a set of modules providing specific processing functions and grouped into several categories:

  • A set of collectors
  • A Central Analysis Cluster:
    • A centralization module / archiving raw logs
    • A centralization module / recording events
    • A central correlation engine
    • An Alerting module / real-time notification
  • An Administration Console
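To make the roles of these modules concrete, here is a minimal in-process sketch of the collect → correlate → alert flow. The event schema and the brute-force rule are invented for illustration; in the real architecture the modules exchange data over a distributed bus rather than Python lists:

```python
from collections import Counter
from typing import Iterable, List

def correlate_failed_logins(events: Iterable[dict], threshold: int = 3) -> List[str]:
    """Toy correlation rule: raise an alert for any source IP that
    produces `threshold` or more failed-login events.
    The event schema ({'type', 'src_ip'}) is hypothetical."""
    failures = Counter(e["src_ip"] for e in events
                       if e["type"] == "login_failure")
    return [f"ALERT: brute-force suspected from {ip}"
            for ip, count in failures.items() if count >= threshold]

# Events as a collector might emit them onto the data bus.
events = [
    {"type": "login_failure", "src_ip": "10.0.0.7"},
    {"type": "login_failure", "src_ip": "10.0.0.7"},
    {"type": "login_success", "src_ip": "10.0.0.8"},
    {"type": "login_failure", "src_ip": "10.0.0.7"},
]
alerts = correlate_failed_logins(events)
```

The alerting module would then forward each string in `alerts` as a real-time notification.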

Keenaï general architecture.

The main technologies used in Keenaï are:

  • A Big Data stack:
    Hadoop / YARN
  • Apache Kafka:
    distributed data bus (logs, events, alerts, models, information)
  • Elasticsearch:
    indexing and visualization of data in real time
  • Apache Flink:
    distributed processing and analysis
  • Hadoop Distributed File System (HDFS):
    log storage, metric result storage for Machine Learning, …
  • Apache Spark:
    batch processing
  • Tomcat:
    web administration console
  • MySQL:
    database for console data (users, profiles, rules, filters, …)
  • Logstash:
    log standardization
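As a rough illustration of the standardization step that Logstash performs, here is a minimal Python sketch that turns one raw syslog-like line into a flat record of named fields. The pattern covers a single hypothetical layout; real Logstash pipelines use grok filters covering many log formats:

```python
import re

# Hypothetical pattern for one common syslog-like layout.
LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\w{3} +\d+ [\d:]{8}) "
    r"(?P<host>\S+) (?P<process>[\w\-/]+): (?P<message>.*)$"
)

def standardize(raw_line: str) -> dict:
    """Turn one raw log line into a dict of named fields, mimicking
    the normalization applied before events enter the data bus."""
    match = LOG_PATTERN.match(raw_line)
    if match is None:
        # Keep unparseable lines rather than dropping them.
        return {"message": raw_line, "parse_error": True}
    return match.groupdict()

record = standardize(
    "Mar  3 10:15:42 fw01 sshd: Failed password for root from 10.0.0.7"
)
```

Once in this structured form, records can be indexed by Elasticsearch and consumed by the correlation engine.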

The general workflow of data processing (events & configuration) is described in the figure below:

General data processing workflow.