Data Engineer

REMOTE / Montreal, Quebec, Canada - Permanent
This job allows you to work remotely.

Job Description

Our client is a rapidly growing global company and the innovator of Service-Driven Supply Chain Planning software. In a world that never follows the rules, organizations have to be ready for anything: from the challenges of multi-echelon inventory optimization to the endless surprises of sporadic demand. To do this, they have to predict more behaviors, protect against surprises, and perform more efficiently.

For more than two decades, our client has been helping companies around the world overcome demand volatility and supply chain complexity so they can deliver outstanding service to their customers and improve their bottom line. Their solutions have been recognized globally by customers and analyst firms such as Gartner for their ability to support service and inventory trade-offs while dramatically improving planner productivity.

Their software has been successfully deployed in more than 44 countries, with one of the highest customer retention rates in the industry.

About the role:

We are looking for a Data Engineer with strong data modelling, scripting, and pipelining skills. The Data Engineer will work with large, complex data sets and connect multiple systems in the most secure and efficient way.

In this role, you will be responsible for designing, documenting, developing, unit testing, and supporting data pipelines between internal and external applications while leveraging state-of-the-art technologies and tools.

You will:

• Design reliable and scalable ETL/ELT pipelines, from data ingestion through delivery of the end product
• Build custom solutions to automate workflows and supporting architecture
• Take ownership of existing products and pipelines, making sure they are delivered on time
• Implement upgrades and optimizations to existing processes, including migrating code to newer versions of libraries, packages, and infrastructure
• Communicate with different stakeholders to set expectations and determine priorities
• Ensure quality, reliability and uptime for critical automated processes, including helping the data science team diagnose and resolve issues in the pipeline and in the data
• Design environments within the platform to house data securely, enabling clients and internal teams to access it appropriately

Must Have Skills:

• A Bachelor's degree in Engineering, Information Technology, or a similar field
• At least 3 years of commercial experience programming in languages typically used in data engineering, such as Python, Java, Scala, or Go
• At least 2 years' experience working with distributed systems such as Spark on cloud computing platforms such as AWS, GCP, and Azure, leveraging ETL tools such as AWS Glue and Azure Data Factory
• Experience with data lakes and data lakehouses (e.g., Databricks, Dremio)
• Experience with highly distributed SaaS systems, distributed cloud storage, event queues, and data streaming using Apache Kafka or similar
• Good understanding of data engineering, SQL/NoSQL databases and database design, distributed systems, and/or information retrieval
• Ability to plan and collect requirements and interact with analysts and data science teams
• Experience working in an Agile environment at a software company

Nice to Have Skills:

• Experience with Graph Databases and Knowledge Graphs
• Knowledge of Apache Airflow
• Familiarity with common Azure services
• Experience writing high-performance queries in SQL and unstructured or graph query languages


Starting: ASAP