Job Description
Location: London
Hybrid: 3 days per week in London office
About the Role
Join a mission-critical team within our firm’s cutting-edge platform engineering function, supporting the platform used by front-office developers (quants and strategists). This is a unique, hands-on opportunity within a focused, high-impact team.
You’ll be at the forefront of data and cloud-native engineering, working with modern technologies across AWS, on-premises Kubernetes, Python, and data pipelines, while engaging directly with key internal platforms and front-office business developers. If you are passionate about solving hard technical problems, staying current with trends like AI engineering, and making a difference in a globally respected financial institution, this role is for you.
The successful AWS Data Engineer will have the chance to make a significant impact by designing the platform and working with cutting-edge technologies like Databricks and Snowflake at the heart of a leading global investment bank’s front office. This is a rare greenfield role that offers the opportunity to solve the data pipeline challenges faced by all banks, working closely with various businesses and gaining an overview of many different sectors.
What We’re Looking For
• 5+ years of hands-on experience with AWS data engineering technologies, including Glue, PySpark, Athena, Iceberg, Databricks, Lake Formation, and other standard data engineering tools.
• Strong experience engineering in a front-office/capital markets environment.
• Previous experience in implementing best practices for data engineering, including data governance, data quality, and data security.
• Proficiency in data processing and analysis using Python and SQL.
• Strong knowledge of market data and its applications.
• Understanding of Generative AI concepts, along with hands-on experience in developing and deploying AI applications in real-world environments.
Nice to Have
• Experience with other data engineering tools and technologies.
• Knowledge of Machine Learning / AI and data science concepts.
Accountabilities
• Build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.
• Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete, and consistent data.
• Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
• Develop processing and analysis algorithms fit for the intended data complexity and volumes.
• Collaborate with data scientists to build and deploy machine learning models.