Data Engineer at Zopa
London, GB
Zopa’s data-driven culture has played a major role in the fantastic growth we have been experiencing. We want to continue strengthening this culture, empowering people with tools, knowledge, and easy-to-use data.
 
We need an experienced Senior Data Engineer to lead the process of upgrading, building, and optimizing our new data warehouse/lake combo. The right candidate will combine software development and project management skills, and will set and drive the project roadmap.
 
Your talent and drive will help create a scalable, robust, and efficient data-warehousing solution powering the analytical needs of the whole company.
 
For more clarity on our values and mode of operation, see our tech blog, and especially our posts on Predictor, Data Democratization, and the cross-functional tribal model we use.

What you'll be doing:

    • Liaise with data analysts, data scientists and decision makers such as product owners and business analysts to gather requirements
    • Build a scalable, reliable, maintainable, secure and high-performance data warehouse (Redshift) and data lake (S3) combo powering the analytical needs of the whole company
    • Lead the design, development and implementation of various data pipelines using AWS based big data technologies
    • Model data in Redshift and Hive/Glue to ensure data integrity, ease of data access and query efficiency

Tech stack:

    • Data Pipeline: AWS S3, Lambda, Glue, Redshift and Kinesis/Kafka
    • ETL: Job orchestration using Airflow or AWS Glue or Custom ETL/ELT
    • SQL: Postgres/SQL Server/Redshift
    • Python: Pandas, NumPy and boto3 AWS SDK
    • Big Data Technologies: Spark, Athena/Presto, Glue/Hive Data Catalog

Essential skills:

    • 4+ years of experience in Data Engineering
    • Strong understanding of Python Data Structures and algorithms
    • Experience with data pipelines and ETL/ELT process using Python 
    • Experience with data modelling and data architecture optimisation
    • Experience with Postgres, SQL Server or Redshift databases 
    • Proficient in writing complex SQL statements
    • Experience with different AWS services and Spark
    • Understanding of software development life cycle and practices such as coding standards, code reviews and version control using Git
    • Experience working with CI/CD

Bonus points for:

    • Dimensional data modelling using the Kimball methodology (star/snowflake schemas)
    • Terraform or AWS CloudFormation
    • Tableau
    • Agile

Behaviours:

    • Strong passion for project management, execution and getting things done 
    • Ability to communicate with both technical and non-technical stakeholders
    • Excellent interpersonal, relationship building and influencing skills
    • Passionate about learning new, cutting-edge technologies

We are committed to equality of opportunity for all staff, and applications from individuals are encouraged regardless of age, disability, sex, gender, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage or civil partnership.