We are looking for a skilled and motivated Data Engineer with 3 to 4 years of hands-on experience in building and maintaining data pipelines using Azure data services, Python, and SQL. You will play a key role in designing, implementing, and optimizing scalable data workflows that support analytics and reporting initiatives across the organization.
Responsibilities
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
- Implement ETL/ELT processes for structured and semi-structured data.
- Develop and optimize SQL queries for data transformation, integration, and analysis.
- Work closely with data analysts and business stakeholders to understand data requirements.
- Collaborate with data scientists and BI developers to enable advanced analytics and reporting solutions.
- Ensure data quality, security, and compliance across data solutions.
- Maintain comprehensive documentation and participate in code reviews.
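To give candidates a feel for the day-to-day work, here is a minimal sketch of the kind of ETL step described above, using only the Python standard library (the field names and data are illustrative, not from any specific pipeline): extract semi-structured JSON events, flatten the nested payload, and load the result as CSV rows.

```python
import csv
import io
import json

# Illustrative input: semi-structured JSON events with a nested payload.
raw_events = '''
[{"id": 1, "payload": {"user": "alice", "clicks": 3}},
 {"id": 2, "payload": {"user": "bob", "clicks": 7}}]
'''

# Extract: parse the raw JSON into Python objects.
events = json.loads(raw_events)

# Transform: flatten the nested "payload" object into top-level columns.
rows = [
    {"id": e["id"], "user": e["payload"]["user"], "clicks": e["payload"]["clicks"]}
    for e in events
]

# Load: write to an in-memory CSV (in practice, a file, table, or lake path).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "user", "clicks"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

In production this pattern would run inside Azure Data Factory or Databricks rather than plain scripts, but the extract/transform/load shape is the same.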
Required Skills
- 3–4 years of experience in Data Engineering or related roles.
- Proficiency in Python for data manipulation, scripting, and automation.
- Strong command of SQL, including complex joins, window functions, and CTEs.
- Hands-on experience with Azure Data Factory, Azure Databricks, Azure Data Lake Storage (ADLS), and Azure SQL Database.
- Familiarity with Delta Lake, Azure Key Vault, and Azure Monitor for secure and reliable data workflows.
- Experience working with Azure Blob Storage, Azure Logic Apps, or Azure Functions for data orchestration and automation.
- Understanding of data lake architecture and best practices for building scalable data pipelines on Azure.
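As a rough illustration of the SQL fluency expected (a sketch using the stdlib `sqlite3` module as a stand-in for Azure SQL Database; the table and data are hypothetical), the snippet below combines a CTE with a window function to find each customer's largest order:

```python
import sqlite3

# In-memory database standing in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 50.0), ("alice", 120.0), ("bob", 80.0)],
)

query = """
WITH ranked AS (                      -- CTE
    SELECT customer,
           amount,
           RANK() OVER (              -- window function
               PARTITION BY customer
               ORDER BY amount DESC
           ) AS rnk
    FROM orders
)
SELECT customer, amount
FROM ranked
WHERE rnk = 1                         -- keep each customer's top order
ORDER BY customer
"""
top_orders = conn.execute(query).fetchall()
print(top_orders)  # [('alice', 120.0), ('bob', 80.0)]
```

The same query shape translates directly to T-SQL on Azure SQL Database or Spark SQL on Databricks.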