Job Summary:
We are seeking a skilled Data Warehouse Engineer to design, develop, and maintain our data warehouse solutions. The ideal candidate will have expertise in ETL processes, data modeling, and database technologies, and will ensure reliable data integration, storage, and retrieval. This role requires strong problem-solving skills and experience working with large-scale data systems.
Key Responsibilities:
1. Data Warehouse Development & Maintenance
- Design, develop, and maintain data warehouse architecture to support business intelligence and analytics needs.
- Implement ETL (Extract, Transform, Load) processes to ensure efficient data flow from multiple sources.
- Optimize database performance, ensuring high availability and scalability.
2. Data Integration & ETL Development
- Develop and maintain ETL pipelines using tools like Apache NiFi, Informatica, Talend, or AWS Glue.
- Ensure data accuracy, consistency, and security during the transformation process.
- Work with various structured and unstructured data sources, including APIs and third-party databases.
3. Data Modeling & Optimization
- Design and maintain star and snowflake schema data models for reporting and analytics.
- Optimize SQL queries and database structures for improved performance.
- Work with big data technologies (e.g., Hadoop, Spark) when necessary.
4. Collaboration & Reporting
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Support business intelligence teams by enabling efficient data reporting and dashboarding.
- Ensure compliance with data governance and security policies.
Required Skills & Qualifications:
✅ Bachelor’s degree in Computer Science, Data Engineering, or a related field.
✅ 3-7 years of experience in data warehousing, ETL development, and database management.
✅ Proficiency in SQL, plus Python or Scala, for data manipulation.
✅ Experience with data warehouse technologies (e.g., Snowflake, Redshift, BigQuery, Teradata).
✅ Hands-on experience with ETL tools like Informatica, Talend, or AWS Glue.
✅ Familiarity with cloud platforms (AWS, Azure, GCP) and data lake concepts.
✅ Knowledge of big data processing frameworks (Hadoop, Spark) is a plus.
✅ Strong analytical and problem-solving skills.
Preferred Qualifications:
⭐ Experience with CI/CD pipelines for data engineering.
⭐ Knowledge of data security and compliance best practices.
⭐ Familiarity with data visualization tools like Power BI or Tableau.