Senior Data Engineer
We are a leader in AI-powered enterprise operations, delivering cutting-edge digital solutions and consulting services that transform businesses and drive measurable value. With deep domain expertise in private capital markets and beyond, we specialize in optimizing workflows, unlocking efficiency, and enabling data-driven decision-making. Our technology ecosystem—including our Core Platform, Olympus Software, and Pantheon Solutions Suite—empowers high-growth companies and private equity-backed organizations to realize strategic advantage through data and automation.
As a Senior Data Engineer, you’ll play a vital role in shaping and scaling our data infrastructure. You’ll design and maintain robust ETL pipelines, transform data for business intelligence and AI applications, and ensure our platforms deliver high performance and reliability at scale.
Responsibilities
- Design, build, and deploy Python-based ETL pipelines using tools like Prefect and Airflow (see the illustrative sketch after this list).
- Model and implement dimensional and denormalized schemas for optimized performance and AI readiness.
- Develop and manage cloud-native data workflows using event-based and streaming technologies.
- Write efficient and maintainable SQL and stored procedures to support complex queries and data transformations.
- Engineer data pipelines that ingest and process structured, semi-structured, and unstructured data from diverse sources.
- Ensure data quality, security, and governance across all pipelines using best practices with tools like DBT and Pandas.
- Monitor, debug, and optimize data flows to support scalability, reliability, and speed.
- Support CI/CD processes, code reviews, and infrastructure automation.
- Collaborate across teams to ensure data accessibility and alignment with business needs.
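To give a concrete feel for the day-to-day work, here is a minimal, purely illustrative sketch of the kind of ETL pipeline described above, assuming Prefect 2.x and Pandas. The flow name, the order_id and order_date columns, and the file paths are hypothetical stand-ins, and the Parquet write assumes a Parquet engine such as pyarrow is installed.

import pandas as pd
from prefect import flow, task

@task(retries=2, retry_delay_seconds=60)
def extract(source_url: str) -> pd.DataFrame:
    # Pull raw records from a source system (a CSV stands in here).
    return pd.read_csv(source_url)

@task
def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Drop incomplete rows and normalize types for downstream analytics.
    cleaned = raw.dropna(subset=["order_id"]).copy()
    cleaned["order_date"] = pd.to_datetime(cleaned["order_date"])
    return cleaned

@task
def load(df: pd.DataFrame, target_path: str) -> None:
    # Write to a staging location (a local Parquet file stands in for the warehouse).
    df.to_parquet(target_path, index=False)

@flow(name="orders-etl")
def orders_etl(source_url: str = "orders.csv", target_path: str = "orders_clean.parquet") -> None:
    load(transform(extract(source_url)), target_path)

if __name__ == "__main__":
    orders_etl()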
Requirements
- 3+ years of Python development experience, including libraries like Pandas.
- 5+ years of experience writing complex SQL queries in enterprise RDBMS environments.
- 5+ years building and deploying ETL pipelines using Airflow, Prefect, or similar orchestration tools.
- Experience with data warehouse platforms such as Amazon Redshift, RDS, or Snowflake.
- Strong understanding of data warehouse design principles: OLTP, OLAP, dimensions, facts.
- Familiarity with cloud-based architectures, messaging systems, and analytics workflows.
- Experience with data modeling and designing AI-ready schemas.
Tech Stack
- Languages & Libraries: Python, Pandas, SQL, DBT
- Orchestration: Prefect, Airflow
- Data Warehouses: Redshift, Snowflake, RDS
- ETL & Data Processing: DBT, Pandas, SQL stored procedures
- Cloud & DevOps: AWS (Lambda, Step Functions), Docker, CI/CD
- Optional/Nice-to-Have: Kubernetes, PySpark, Databricks, Cloud Certifications, Data Partitioning
Why Join Us?
- Be part of a forward-thinking team driving AI-powered solutions at scale.
- Enjoy flexible work arrangements, including a hybrid model in Warsaw.
- Take advantage of generous PTO policies and a comprehensive benefits package.
- Join a dynamic and collaborative work culture that values innovation and continuous learning.
- Work on mission-critical projects with real business impact, supporting high-growth clients across the globe.
At our company, you won’t just build pipelines—you’ll help architect the future of intelligent operations.
Over 60% of our candidates are invited to an interview with our clients.
Apply using the form below and we will reach out to you within 24 hours.