From everyday products to unique deals and everything in between: on this retail platform, millions of visitors come together daily to search, compare, and purchase. As usage continues to grow, so does the complexity behind the platform, increasing the need for scalable, reliable, and high-performing data systems to support every interaction.
You have my attention
Behind the scenes, teams work on building and optimising the data platform that powers these experiences. From enabling real-time and batch data processing to ensuring data quality, governance, and accessibility, everything is geared towards unlocking insights and supporting data-driven decisions. In this fast-evolving environment, there is continuous focus on improving performance, scalability, and the way data is made available across the organisation.
Now you have my curiosity
As a Staff Data Engineer within a Data Platform team, you take ownership of building and scaling a modern data and ML platform where data is central to everything. You independently develop and deliver high-quality features, refactor existing data products, and ensure everything meets high standards in reliability, scalability, and performance. You have the freedom and responsibility to make decisions and shape how the platform evolves, while supporting data teams in working more effectively with data.
Working hands-on, you design and maintain batch and streaming data pipelines, storage solutions, and APIs that support complex analytical and machine learning workloads. You are responsible for the self-serve data platform, covering data collection, lake management, orchestration, processing, and distribution. In this role, you combine strong technical expertise with a deep understanding of data management and governance. Using modern technologies and cloud-based environments such as Databricks, AWS, Spark, Python, Kafka, and Airflow, you continuously improve the platform to unlock new opportunities for insight and innovation.
Which skills do I need?
You bring at least 10 years of hands-on experience in Software Development or Data Engineering, with a strong track record in building and operating data-intensive platforms. You have experience with cloud-native applications in AWS, both real-time and batch-based, and are familiar with modern data platforms like Databricks. You are skilled in building and maintaining Spark applications using Python and PySpark, and have a solid understanding of data quality, governance, and scalable data solutions.
You are comfortable working with orchestration tools such as Airflow, container technologies like Docker and Kubernetes, and infrastructure-as-code tools like Terraform. Strong SQL and data validation skills are a given. You collaborate easily with engineers and data teams, communicate clearly with both technical and non-technical stakeholders, and thrive in a fast-paced environment with a high level of ownership and impact.
What’s in it for you?
Let’s make it concrete. You’ll receive a competitive gross annual salary between €77,000 and €110,000, depending on your skills and experience. In addition, you’ll receive an annual bonus, ensuring your performance is rewarded.
You’ll work in a flexible environment with plenty of room for hybrid working, allowing you to structure your work in a way that suits you best. On top of solid additional benefits, such as travel reimbursement, you’ll have ample opportunities for personal development and career growth. Ready for your next step? Apply now!