Data Delivery Specialist - MACRO - (SHaPE)

McKinsey & Company
  • Experience

    2-4 years
  • Salary

    $110,000-$120,000
  • Location

    Gurugram, Haryana, India
  • Job Function

    Data Delivery Specialist
  • Industry

    Information Technology
  • Qualification

    Degree in Computer Engineering, Degree in Computer Science, Degree in Data Science, Degree in Mathematics, Degree in Statistics

Key Skills

Python Programming, PySpark, SQL, Apache Airflow, Databricks, Kedro, AWS, Google Cloud Platform (GCP), Azure, Scala Programming, RDBMS, PostgreSQL, Amazon Redshift, Teradata, Snowflake, NoSQL, Apache Cassandra, Neo4j, Titan, MongoDB

Job Description

Who You'll Work With

You will be based in Gurugram, India, as part of McKinsey’s Medical and Admin Cost and Results Optimization (MACRO) domain within the Social, Healthcare, and Public Entities (SHaPE) practice. You'll collaborate with our healthcare analytics teams in technology hubs across the US and in India. 

This team uses healthcare data (payer, provider, third-party data, etc.), advanced analytics (a combination of descriptive and predictive analytics), advanced technologies (automation, digital, AI), and core consulting skills to answer some of the most pressing questions our healthcare payer clients have. We leverage a big-data analytics platform that includes hundreds of terabytes of integrated claim, encounter, clinical, consumer, and other data. The team consists of 100 dedicated experts and 250 affiliated professionals with industry, advanced analytics, statistics, clinical, and software expertise, all of whom work to design, deliver, and operate advanced analytic tools that help healthcare clients around the world.

McKinsey SHaPE fosters innovation driven by advanced analytics, user experience design thinking, and predictive forecasting to develop new products and services and integrate them into our client work. It is helping to shift our model toward asset-based consulting and serves as a foundation for, and expands our investment in, our entrepreneurial culture. Through innovative software-as-a-service solutions, strategic acquisitions, and a vibrant ecosystem of alliances, we are redefining what it means to work with McKinsey.

As one of the fastest-growing parts of our firm, SHaPE has more than 1,500 dedicated professionals (including more than 800 analysts and data scientists), and we're hiring more mathematicians, data scientists, designers, software engineers, product managers, client development managers, and general managers.

What You'll Do

You will be a trusted technical expert within the data engineering group, partnering with cross-functional team members to design, develop, and maintain software applications and data services.

In this role, you will bring your development and big data expertise to a team that builds analytical products to help our clients transform healthcare around the world.

You will help modernize and reinvent our technology stack so that our products remain market-leading solutions. Applying current engineering frameworks and best practices, you will design and develop core data frameworks that data engineers and other team members build on. You will be a key contributor to the group's technology roadmap, advancing not just the technology stack but also our software development processes and engineering skills.

You will be an active learner, identifying and evaluating new tools and technologies to meet requirements. You will contribute code and participate in code reviews. You will break down user stories into technical tasks and requirements, and you will identify, communicate, and escalate risks when appropriate.

Qualifications

  • Bachelor's degree in computer science or equivalent area; advanced degree is a plus
  • 5+ years of experience implementing data-intensive applications using Python, Scala, or Java
  • 4+ years of experience with distributed data processing engines such as Spark, Hive, and Hadoop-based tools
  • 4+ years of experience in data workflow automation using ETL/ELT tools such as Talend, Kedro, NiFi, shell scripts, etc.
  • Strong experience in SQL and data optimization techniques is a must
  • Experience in building Natural Language Processing (NLP) based data solutions
  • Experience in building large data sets to support data science development
  • Comprehensive knowledge of continuous delivery practices and DevOps tools such as Jenkins, Git, etc.
  • Experience in Agile development and delivery methodology
  • Ability to understand complex systems and solve challenging analytical problems
  • Comfort with ambiguity and rapid changes common in early-stage product development
  • Healthcare domain knowledge is not required, but an interest in healthcare is appreciated
  • Travel expectation: less than 10%

Company Info.

McKinsey & Company

McKinsey & Company is a management consulting firm, founded in 1926 by University of Chicago professor James O. McKinsey, that advises on strategic management to corporations, governments, and other organizations. McKinsey is the oldest and largest of the "Big Three" management consultancies (MBB), the world's three largest strategy consulting firms by revenue. It has consistently been recognized by Vault as the most prestigious consulting firm.

  • Industry
    Financial Services, Management Consulting
  • No. of Employees
    33,104
  • Location
    55 East 52nd Street, New York, NY, USA
