Job Description
- Experience working with the Spark framework; good understanding of its core concepts, optimizations, and best practices
- Hands-on experience writing code in PySpark
- Understanding of design principles and OOP
- Experience writing complex queries to derive business-critical insights
- Understanding of Data Lake vs. Data Warehouse concepts
- Knowledge of machine learning would be an added advantage
- Experience with NoSQL technologies (MongoDB, DynamoDB)
Department: Cloud & Data Engineering
Open Positions: 4
Skills Required: Big Data, PySpark, Spark, OOP
Role:
- Design and implement solutions for problems arising from large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Take end-to-end ownership of all assigned tasks
- Design, build & maintain efficient, reusable & reliable code
- Test implementation, troubleshoot & correct problems
- Work effectively both as an individual contributor and as part of a team
- Ensure high quality software development with complete documentation and traceability
- Fulfill organizational responsibilities (sharing knowledge & experience with other teams/ groups)
- Conduct technical trainings/sessions; write whitepapers, case studies, blogs, etc.
Education/Qualification: Bachelor's degree
Years of Experience: 6 to 11
Company Info.
Impetus
Impetus is focused on creating big business impact through big data solutions for Fortune 1000 enterprises. The company offers a unique mix of software products, consulting services, data science capabilities, and technology expertise. Leverage our big data solutions for automated warehouse transformation (Impetus Workload Transformation Solution), real-time streaming and batch analytics (StreamAnalytix), and rapid application development on Spark.