Job Scope:
As a Senior Data Engineer, you will design, develop, implement, and optimize distributed data processing pipelines that handle large volumes of data. Your focus will be on scalability, low latency, and fault tolerance to ensure the reliability and efficiency of our systems. You will collaborate closely with Product Management and Business teams to drive product features and contribute to architectural decisions. You will also provide valuable business insights by leveraging internal tools, databases, and industry data.
Key Responsibilities:
- Data Processing Pipeline Development: Design, develop, and implement distributed data processing pipelines using technologies such as Spark, Python/Scala, Java, SQL, and Hive. Optimize pipelines for scalability, low latency, and fault tolerance.
- Collaboration with Product Management and Business: Engage with Product Management and Business stakeholders to understand requirements, set priorities, and deliver product features that align with market needs. Drive the agenda for platform development and keep it ahead of market trends.
- Influence Cross-Functional Architecture: Participate in sprint planning and influence cross-functional architecture decisions. Collaborate with other teams to ensure alignment with overall platform goals and objectives.
- Business Insights: Provide valuable business insights by analyzing data using internal tools, databases, and industry data sources. Translate insights into actionable recommendations to drive business growth and efficiency.
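Candidates sometimes ask what "distributed data processing" looks like in practice. As a hedged, toy-scale sketch (the actual stack uses Spark; the function names here are invented for illustration), the partition-then-merge pattern at the heart of such pipelines can be shown in plain Python:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def partial_count(chunk):
    """Map step: count keys within one partition of the data."""
    return Counter(chunk)

def parallel_count(records, workers=4):
    """Toy map-reduce: partition the records, count each partition in
    parallel, then merge the partial results. A local stand-in for what
    Spark's groupBy/agg does across a cluster of executors."""
    chunks = [records[i::workers] for i in range(workers)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(partial_count, chunks):
            total.update(partial)
    return dict(total)
```

For example, parallel_count(["a", "b", "a"]) returns {"a": 2, "b": 1}. The real engineering work in the role is in making this pattern scale, skew-tolerant, and fault-tolerant across machines.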
Skillset:
- Proven work experience with Spark, Python/Scala, Java, SQL, Hive, and at least one RDBMS or NoSQL database.
- Demonstrated expertise in writing complex, highly optimized queries across large datasets.
- Hands-on experience with cloud environments such as GCP, Azure, or others.
- Proficiency in Unix/Linux shell scripting or similar programming/scripting languages.
- Understanding of CI/CD frameworks and practices.
- Exposure to tools such as Apache Airflow, Apache NiFi, or similar data orchestration and workflow management tools.
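To make the query-optimization bullet concrete, here is a minimal, hedged sketch using Python's built-in sqlite3 module (the `events` table and its columns are invented for illustration, not part of this posting): a grouped aggregation backed by a covering index, so the engine can answer from the index alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT, amount REAL)")

# Synthetic data: 100 users, each with one 'click' and one 'buy' event.
rows = [(u, t, float(u)) for u in range(100) for t in ("click", "buy")]
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Covering index: every column the query touches is in the index,
# so the aggregation can skip the base table entirely.
conn.execute("CREATE INDEX idx_type_user_amt ON events (event_type, user_id, amount)")

query = """
    SELECT event_type,
           COUNT(DISTINCT user_id) AS users,
           SUM(amount)             AS total
    FROM events
    GROUP BY event_type
"""
results = conn.execute(query).fetchall()
```

Running EXPLAIN QUERY PLAN on the same query is the usual way to verify that the covering index is actually chosen; at the datasets this role works with, that check is done against Hive or warehouse query plans rather than SQLite, but the reasoning is the same.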
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
- 9 years of experience in big data engineering or related roles.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to thrive in a fast-paced, dynamic environment.