Location: Mississauga, ON (Hybrid - 3 days in office)
We are seeking a highly skilled and experienced Azure Data Engineer to join our dynamic team. This is a hybrid role requiring 3 days per week in our Mississauga, ON office. The ideal candidate will have a strong background in Python, PySpark, SQL, and data warehousing for enterprise-level systems, and be comfortable collaborating with business users and analysts.
Responsibilities:
As an Azure Data Engineer, you will be responsible for:
- Building and optimizing robust data pipelines for efficient data ingestion, transformation, and loading (ETL) from diverse sources, ensuring data quality and integrity.
- Designing, developing, and deploying Spark programs in a Databricks environment to process and analyze large volumes of data (a brief illustrative sketch follows this list).
- Working with Delta Lake, data warehousing (DWH), data integration, and data modeling concepts.
- Writing proficient, well-structured programs in Python and SQL.
- Designing and developing dimensional data models for data warehouses.
- Working with event-based/streaming technologies for data ingestion and processing.
- Optimizing Databricks jobs for performance and scalability to handle big data workloads.
- Monitoring and troubleshooting Databricks jobs, identifying and resolving issues or bottlenecks.
- Implementing best practices for data management, security, and governance within the Databricks environment.
- Designing and developing enterprise data warehouse solutions.
- Writing proficient SQL queries and programs, including stored procedures, and reverse-engineering existing processes.
- Performing code reviews to ensure alignment with requirements, optimal execution patterns, and adherence to established standards.
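To give a sense of the day-to-day work described above, here is a minimal sketch of the kind of PySpark ETL step this role involves: ingest raw files, apply basic cleansing, and load the result into a Delta table. It assumes a Spark environment with Delta Lake available (such as Databricks); all paths, column names, and the app name are hypothetical placeholders, not part of this posting.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns).
# Assumes Delta Lake is configured, e.g. a Databricks cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from a landing zone (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/orders/")
)

# Transform: deduplicate, filter out bad records, derive typed columns.
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_ts").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Load: append to a Delta table partitioned by order date.
(
    orders.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("/mnt/curated/orders_delta/")
)
```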
Must-Have Skills:
- Snowflake
- SQL
- Python
- Azure Data Factory (ADF)
- Azure Databricks
- PySpark
- Data Warehouse Concepts
Qualifications:
- 9+ years of Python coding experience.
- 5+ years of SQL Server-based development with large datasets.
- 5+ years of experience developing and deploying ETL pipelines using PySpark in Databricks.
- Experience with any cloud data warehouse such as Synapse, BigQuery, Redshift, or Snowflake.
- Strong experience in Data Warehousing concepts: OLTP, OLAP, Dimensions, Facts, and Data Modeling.
- Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills.
- Experience with Cloud-based data architectures, messaging, and analytics.
- Cloud certification(s) (e.g., Azure Data Engineer Associate).
Nice to Have:
- Experience with Apache Airflow.