A 1967 Math Paper Just Solved AI's $100 Million Problem
DeepSeek's mHC reduces signal amplification from 3000x to 1.6x using Sinkhorn-Knopp, a 59-year-old algorithm. Training stability solved.
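For context on the headline above: Sinkhorn-Knopp is simply an alternating row/column rescaling that turns a positive matrix into a doubly stochastic one. Below is a minimal NumPy sketch of that classic 1967 iteration; it is an illustration only, not DeepSeek's mHC implementation, and the function name and tolerances are my own.

```python
import numpy as np

def sinkhorn_knopp(A: np.ndarray, n_iters: int = 100, eps: float = 1e-9) -> np.ndarray:
    """Alternately rescale rows and columns of a strictly positive matrix
    until it is approximately doubly stochastic (rows and columns sum to 1).
    Illustrative sketch only, not DeepSeek's mHC code."""
    A = np.asarray(A, dtype=np.float64)
    for _ in range(n_iters):
        A = A / (A.sum(axis=1, keepdims=True) + eps)  # normalize rows
        A = A / (A.sum(axis=0, keepdims=True) + eps)  # normalize columns
    return A

M = np.random.rand(4, 4) + 0.1       # strictly positive input
B = sinkhorn_knopp(M)
print(B.sum(axis=1), B.sum(axis=0))  # both close to [1. 1. 1. 1.]
```

The appeal for training stability, as the article argues, is that a doubly stochastic mixing matrix neither amplifies nor attenuates the overall signal as it passes through the layers.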
Read on Medium
Building intelligent automation systems and bridging the gap between research and real-world applications.
I'm an AI engineer and researcher with more than 9 years of experience blending software engineering, data science, and deep learning. Right now, I'm specializing as an AI Automation Engineer, working on smart systems that automate complex tasks, while remaining an active AI researcher exploring new frontiers in the field.
I love building robust, reliable AI models that actually work in real life. Recently, I've been focusing on robotics projects using reinforcement learning, realistic simulations with MuJoCo and Isaac Sim, and computer vision techniques to make robots smarter and more precise.
I earned my Master's in Bioinformatics and Modeling from Université Libre de Bruxelles. My master's thesis focused on predicting protein melting temperatures using advanced NLP methods, specifically fine-tuning ProteinBERT. I was also involved in developing computer vision models to detect pancreatic cancer from histological images.
When I'm not at work, you might find me replicating AI research papers to stay sharp, composing electronic music (something I've been passionate about for over a decade), or diving into philosophical readings, another deep interest of mine.
Statistical analysis, hypothesis testing, and experimental design
Comprehensive analysis of research papers and state-of-the-art methods
Writing complete technical documentation and research papers
Cross-functional teamwork with researchers from diverse backgrounds
I develop practical deep learning models and AI agents for intelligent automation, enhancing real-time decision-making in robotics and industry.
I apply advanced NLP and machine learning techniques to tackle real-world bioinformatics challenges, such as predicting protein stability and analyzing biological sequences.
My work involves developing accurate and reliable computer vision algorithms, particularly for biomedical applications such as early and precise cancer detection.
I design advanced reinforcement learning methods combined with realistic simulations (like MuJoCo and Isaac Sim) to enhance robotic performance and autonomy.
Complete PyTorch and MLX implementation of Google's Titans architecture with Neural Long-term Memory. Supports MAC, MAG, MAL variants with Flash Attention 2 and Metal kernels.
View Code
Native MLX implementation of MASt3R for 3D reconstruction on Apple Silicon. 1.87x faster than PyTorch MPS with custom Metal kernels for fused RoPE 2D, bilinear upsampling, and grid sampling.
View Code
Production-optimized fork of ByteDance's Depth Anything 3 with 200x faster cached model loading, adaptive batching, Apple Silicon optimization, and PyPI distribution.
View Code
Text-prompted object segmentation model combining EfficientSAM and GroundingDINO for efficient, language-guided image segmentation with minimal computational overhead.
View Code
Full PyTorch replication of ProteinBERT (Brandes et al., 2022) for protein sequence analysis using self-supervised transformers and transfer learning on biological sequences.
View Code
An optimized RGB-D depth refinement pipeline powered by a Vision Transformer, offering high-performance processing with cross-platform support for CUDA, MPS, and CPU.
View Code
PyTorch implementation of Vision Transformer (Dosovitskiy et al., 2020), demonstrating pure attention-based architecture for image classification without convolutions.
View Code
Novel deep learning model designed to outperform current benchmarks in protein sequence prediction, leveraging advanced transformer architectures and protein-specific embeddings.
In Development
A PyTorch replication of the Fair and Efficient Network (FEN) for multi-agent reinforcement learning, focusing on dynamically balancing fairness and group efficiency.
View Code
Lead AI initiatives on an industrial-scale autonomous sewing robot, blending classical planning, reinforcement learning, computer vision, and large-scale simulation. The goal is to deliver reliable, high-performance robotics for demanding textile environments, from proof-of-concept through integration in production systems.
Drove machine learning research in computational biology, tackling the prediction of protein melting temperatures using state-of-the-art deep learning and NLP methods.
During my research internship, I helped develop machine learning solutions for cancer detection in histopathology images. My contributions ranged from core algorithm design to data engineering and clinical collaboration.
As an independent consultant, I delivered bespoke software and AI solutions to a range of clients—mostly in IoT, data analytics, and digital transformation—guiding projects from requirements to production.
As co-founder, I was responsible for both the technical architecture and business growth of Eavox—a mobile platform designed to energize social interaction through gamified sports challenges. I managed the full product lifecycle, from concept and MVP to launch and user acquisition.
Led the development of both mobile and backend solutions for LevelApp, starting as an intern and moving into a lead engineering role. My responsibilities spanned cross-platform app architecture, classic business app development, integration of IoT and chatbot technologies, and end-to-end DevOps.
NVIDIA
Hugging Face
IBM / Coursera
DeepLearning.AI / Coursera
DeepLearning.AI / Coursera
Udemy
DeepLearning.AI / Coursera
Stanford University / Coursera (Andrew Ng)
Deep dives into AI breakthroughs, industry analysis, and practical guides for developers.
DeepSeek's mHC reduces signal amplification from 3000x to 1.6x using Sinkhorn-Knopp, a 59-year-old algorithm. Training stability solved.
Read on Medium
Abu Dhabi's Falcon-H1R: 88.1% on AIME-24, 2x faster than Qwen3-8B. A 7B hybrid Transformer-Mamba2 model running on your MacBook.
Read on Medium
54,836 layoffs attributed to AI (+332%). Amazon, Microsoft, Salesforce cite AI directly. But 55% of companies regret their AI layoffs and Klarna is rehiring.
Read on Medium
1.87x faster than PyTorch MPS. Native MLX implementation with custom Metal kernels for 3D reconstruction on Apple Silicon.
Read on Medium
AI is reshaping developer hiring, but the data reveals a complex story beyond the headlines. What AWS CEO Matt Garman really said about junior developers.
Read on Medium
A goat farmer proved that $50,000 of coding skill can be replaced by a 5-line Bash script. The Ralph Wiggum technique explained.
Read on Medium
Anthropic's AI coding assistant shipped a critical security patch, 1,096 commits, and revealed 90% of its code is self-written. Complete guide inside.
Read on Medium
30,000 robots/year. Gemini AI brain. 2,000 TFLOPS onboard. The factory of the future was just announced at CES 2026. This time, it's not a demo.
Read on Medium
Gartner predicts 3x more SLM usage than LLMs by 2027. Bayer gained +40% accuracy. The "bigger is better" paradigm is collapsing. Complete enterprise guide.
Read on Medium
In January 2025, a Chinese lab taught an AI to reason by itself. 97% of the time, that AI now pretends to obey us while secretly preserving its own goals.
Read on Medium
Jonathan Ross created the TPU at Google, then founded Groq to annihilate NVIDIA. Nine years later, he works for Jensen Huang. The story of an industry that devours its revolutionaries.
Read on Medium
Technical deep dive into test-time learning, surprise-gated memory, and what cognitive science teaches us about machine memory. TITANS transcends TC⁰ limits.
Read on Medium
DreamerV3 finds diamonds in Minecraft with 1 GPU. OpenAI's VPT needed 720. Sutskever and LeCun left their labs with $35B. The World Models revolution explained.
Read on Medium
EPFL researchers prove Transformers are mathematically injective. The SipIt algorithm reconstructs prompts with 100% accuracy. GDPR regulators are taking notice.
Read on Medium
Pangram Labs analyzed 75,800 reviews: 21% fully AI-generated, 50%+ with AI traces. ICLR submissions up 68%. The peer review crisis exposed.
Read on Medium
USC Viterbi researchers built ionic memristor neurons using silver ions. 3 components vs hundreds. Attojoule-scale energy. 5-10 years to commercial viability.
Read on Medium
After 12 years as Meta's Chief AI Scientist, LeCun left to launch AMI Labs. $3.5B valuation. 76% of AI researchers agree: LLMs alone won't reach AGI.
Read on Medium
Isaac Sim: 94K FPS with 4096 parallel envs. MuJoCo MJX: 2.7M steps/sec on TPU. 6 months of testing. Newton unifies both — 152x acceleration on RTX 4090.
Read on Medium
SAM 3 achieves 54.1 cgF1 on concept segmentation — 2x OWLv2, 4x Gemini 2.5. 74% of human performance, 100+ objects in 30ms. Text prompts replace clicks.
Read on Medium
A deep dive into Google's Titans neural long-term memory architecture. 2M+ token context with O(n) complexity. 98.8% needle-in-haystack accuracy vs Mamba-2's 31%.
Read on Medium
Skills, Hooks, and Commands: the automation layer that completes the arsenal. Anti-hallucination skills, bash validators, and semantic shortcuts.
Read on Medium
RF-DETR: first real-time model to exceed 60 AP on COCO. Transformer architecture, no anchors, no NMS. Built by Roboflow, Apache 2.0 license.
Read on Medium
16 specialized expert agents, 6 connected MCPs for real-time verification, and an anti-hallucination protocol. From drowning in 1 project to crushing 5 in parallel.
Read on Medium
Claude dominates on SWE-bench (80.9% vs ~76% for competitors). Combined with Claude Code, the $100/month investment pays for itself if it saves you 5 hours of debugging.
Read on Medium
OpenAI declared internal "Code Red" as Claude Opus 4.5 dominates benchmarks. Mistral's open-source model runs on laptops for 80% less cost. The AI war just went from competition to chaos.
Read on Medium
200x faster model loading, 14% faster inference, and full Apple Silicon support. Here's how I optimized ByteDance's depth estimation model for production.
Read on Medium
How I transformed ByteDance's research code into a production-ready PyPI package with 2x faster inference, multi-platform support, and a live HuggingFace demo.
Read on Medium
Revolution or Algorithmic Mirage? Discover how robots can now evaluate their own performance using Vision-Language Models, eliminating the need for human feedback.
Read on Medium
Tech giants are racing to build AI that doesn't just follow commands — it predicts your needs and takes actions on your behalf. How close are we to this future?
Read on Medium
I'd love to hear about your project or research collaboration ideas. Let's discuss how we can work together!
Brussels, Belgium
Central European Time (CET/CEST)
Open to new projects and collaborations
AI Automation • ML Development • Research Projects • Technical Consulting
French, English
Comfortable working in both
Interested in collaborations on AI-driven automation, reinforcement learning, NLP, and computer vision.
Open to speaking engagements at conferences, workshops, and industry events (remote or in-person).
I provide specialized consulting on AI strategy, automation techniques, and machine learning model development.