Data Engineer
27/01/23
13000 – 15000 PLN/MONTH
Remote
Algopolis is a start-up developing comprehensive trading and investment strategies
based on artificial intelligence methods and advanced statistical analysis.
Since October 2018, we have been working on AI-based trading strategies. After
very promising initial research results and several months of successful operations
on the U.S. stock market, our team of dedicated data and quantitative traders is
growing and looking for new talent. We offer the opportunity to join a small but
dynamic international team with diverse expertise, working on a project with high
potential and using cutting-edge AI methods. We are located in Warsaw, but our
team members also work from London, New York, Wroclaw and Poznan.
Responsibilities:
- Maintaining and improving data ETL pipelines
- Creating custom data models extracting key financial performance metrics from portfolio data (no need for prior financial knowledge – we will teach everything necessary)
- Creating and maintaining dashboards
- Creating and maintaining Prefect (scheduler) flows for data extraction from our API providers
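To give a feel for the kind of work listed above, here is a minimal, purely illustrative extract–transform–load sketch in plain Python. It is not Algopolis's actual pipeline; all function names and the sample data are hypothetical, and a real flow would typically wrap steps like these as Prefect tasks and pull from an API rather than an inline CSV string.

```python
import csv
import io

# Hypothetical sample of raw price data, standing in for an API response.
RAW = """date,close
2023-01-02,100.0
2023-01-03,102.0
2023-01-04,101.0
"""

def extract(text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and derive a simple daily-return metric."""
    out, prev = [], None
    for row in rows:
        close = float(row["close"])
        ret = None if prev is None else (close - prev) / prev
        out.append({"date": row["date"], "close": close, "return": ret})
        prev = close
    return out

def load(records, store):
    """Load: append records to a destination (here, just a list)."""
    store.extend(records)
    return len(records)

store = []
loaded = load(transform(extract(RAW)), store)
```

In a Prefect flow, `extract`, `transform`, and `load` would each become a task so the scheduler can retry and observe them independently.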
Requirements:
1st priority:
- at least 2 years of experience as a Data Engineer
- strong programming skills in Python (2+ years of backend experience); familiarity with Python libraries: SQLAlchemy, Pytest, Pydantic, Pandas, Flask, FastAPI
- experience working with data: handling non-trivial amounts of data, knowledge of job orchestration tools such as Airflow or Prefect
- strong command of English
2nd priority:
- experience with multiple forms of data storage (relational, analytical, and in-memory databases, object stores, search engines, graph databases), querying them, and physical data modelling
- experience with standard UNIX tools
- strong understanding of database design and design patterns, both relational and non-relational (MySQL, MariaDB, PostgreSQL, MongoDB, InfluxDB)
- best practices for continuous integration/delivery: unit testing, CI/CD pipelines (GitLab)
- agile software development using Scrum/Kanban
- ability to work in an independent, conscientious, and solution-oriented way
- experience with tools: Docker, Docker Compose, RabbitMQ, Redis, GitLab, Git, Metabase
Day to day
We meet at 9:00 am to discuss the research currently underway: in an open discussion, each team member presents the progress of their research in the form of reports on wandb, and we draw conclusions together. Most of the work consists of:
- Drafting the research process – from formulating the question we want to answer to planning each individual experiment
- Programming custom solutions to run specific experiments
- Designing the architecture of neural networks
- Analyzing the experiments
Over 60% of our candidates get invited to an interview with our clients.
Apply with the form below and we will reach out to you within 24 hours.
Apply now!