Toronto, Ontario - Permanent
Our client is an R&D and innovation lab located in downtown Toronto that is responsible for transmitting billions of bytes of secure electronic data at dizzying speeds. Their goal is to make commerce more accessible and convenient, and in 2017 they made their first foray into Canada/North America with an app that helps users organize and pay bills in one simple location. Not only does the app send you reminders so that you never miss a payment, but it also gives you 3% cash back on popular retail brand gift cards! They support their parent company, a mobile payments and financial services company that currently serves 300 million customers.
Working as a small, diverse, and tight-knit team committed to the end consumer, they leverage their expertise in technology to build lasting, secure, and efficient solutions. Their creative and incredibly talented engineers work to provide customized and confidential experiences for their consumers and users. They encourage their employees to take charge of their innovative ideas and execute them with passion and vigour.
Responsibilities:
Work directly with the Product Team, Machine Learning Engineers, and the Platform Engineering Team to create reusable experimental and production data pipelines to be used in thousands of use cases.
Build custom data pipelines on top of our core Python/Hadoop platform.
Develop efficient structures and schemas for data in storage and in transit.
Design, build, and test big data queries for data views and feeds.
Keep high-volume data whole, safe, and flowing using Spark Streaming, Kafka, etc.
Bring a passion for emerging technologies in data processing and storage.
Adopt problem-solving as a way of life: always go to root cause!
Must Have Skills:
3+ years of database experience is essential.
Degree in Computer Engineering or Computer Science, or 3-5 years of equivalent experience.
Proficient in Java and/or Scala and Python.
Experience implementing data platform components such as RESTful APIs, Pub/Sub Systems, and Database Clients.
AWS experience is a plus.
Working knowledge of the toolsets in the Apache Hadoop ecosystem.
Experience assembling and deploying JVM applications.
Familiarity with reactive platforms and microservices.
Experience with R, Apache Spark, or Akka a plus.
Expertise in high-volume data ingest and streaming platforms (e.g., Spark, Hive, Samza, Kafka).
Detail-oriented and love to double- and triple-check your work.
Strong analytical skills.