Machine Learning Engineer

Toronto, Ontario, Canada - Permanent

Job Description

We are seeking a Machine Learning Engineer (MLE) to join our Data Engineering team. This team partners with data consumers such as Data Science, Marketing, and our Business Analyst team, providing data platform solutions that meet both their day-to-day needs and their long-term visions.

In this role, you will focus heavily on enablement work for Data Scientists. This includes working with Product to gather requirements, evaluating solutions, and building systems that enable common Data Science workflows. In addition to building and maintaining systems, you will create and enrich data pipelines that supply requested data and adhere to the SLAs established with our Data Science team.

- Work with our data science, analytics, and marketing teams to understand their data requirements, and partner with Product to identify solutions that enable reusable workflows.
- Build frameworks and tools that help our data analysts and data scientists design and build their own data pipelines in a self-service manner. This extends to both open-source and vendor-managed solutions.
- Be a key hands-on contributor to the design and implementation of our data platform solutions from the infrastructure layer up to the API.
- Model and architect our data in a way that will scale with the increasingly complex ways we’re analyzing it.
- Build robust pipelines that make sure data is where it needs to be, when it needs to be there.
- Stand up, configure, and maintain infrastructure across environments while providing education and support to stakeholders.

Must-Have Skills:

Who you are:
- You have a few years of hands-on experience as an engineer across multiple environments, working on distributed, polyglot systems in languages such as Java, Scala, Clojure, Python, Go, or C++.
- You can work up and down the stack, from deep in the infrastructure all the way up to client libraries.
- You have some experience working with Data Scientists to turn experiments and Proof-of-Concepts (PoCs) into fully productionized solutions. This can include both ad-hoc and vendored platform solutions.
- You have some experience building performant data pipelines that feed Data Science models.
- You are able to measure the impact of technical products across multiple domains through experimentation and statistical analysis.
- You have an understanding of object-oriented and/or functional programming patterns and paradigms.
- You have hands-on experience with multiple data platforms and tools, such as: S3, Redshift, Airflow, Spark, Presto, Hive.
- You enjoy supporting the pieces of the codebase you own for our stakeholders, including providing guidance and education on their functionality.
- You also know how to work with small teams that move fast, and you can achieve strong results with minimal direction.

You may also have experience with:
- Container orchestration: Kubernetes or Mesos
- Caching: Redis or ElastiCache
- Relational databases: Postgres, MySQL, or Oracle
- Cloud platforms: AWS, GCP, and/or Azure
- SQL and data modeling tools


Starting: ASAP
