Senior Data Engineer

G2 Crowd

Job Description

G2 is looking for a Senior Data Engineer to drive the design, development, and optimization of complex data pipelines and architectures for the G2 data platform. Leveraging advanced ETL expertise and cloud-based solutions within AWS and Snowflake environments, you'll lead critical data initiatives, mentor team members, ensure best practices in data engineering, optimize data pipelines, and maintain high data quality and reliability. This role is based in Bengaluru.

In This Role, You Will:

Data infrastructure and processing:

  • Lead the design, development, and optimization of sophisticated data pipelines, ensuring seamless data extraction, transformation, and loading into G2’s data warehouse from diverse sources.
  • Lead engagements with internal stakeholders to understand their data needs and design solutions that ensure data quality and governance.
  • Design the right pipeline architecture to handle data and support various use cases, including analytical reporting and machine learning.
  • Identify, design, and implement process improvements that optimize our data delivery, and re-design infrastructure and pipelines to achieve greater scalability.
  • Help drive the creation of monitoring, alerting, and reporting on the reliability of data pipelines and data processing systems. 
  • Develop and manage database schemas and models that support efficient data storage, retrieval, and analysis.
  • Stay up to date with the latest data technologies and trends and evaluate their applicability to G2.

Data quality assurance and governance:

  • Work closely with key stakeholders and SMEs to define business rules that determine governance and data quality.  
  • Implement robust measures for data quality, validation, and cleansing to ensure accuracy, completeness, and compliance with data governance standards.
  • Design efficient, scalable processes to acquire, manipulate, and store data and ensure adherence to them.

Leadership and mentoring:

  • Provide technical leadership, guidance, and mentorship to junior team members, fostering a culture of excellence and continuous learning in data engineering practices.
  • Collaborate closely with cross-functional teams to align data solutions with organizational goals and ensure successful integration into broader projects.
  • Share insights, contribute to best practice repositories, and drive innovation by evaluating and implementing emerging technologies to enhance data engineering capabilities.

Minimum Qualifications:

We realize applying for jobs can feel daunting at times. Even if you don’t check all the boxes in the job description, we encourage you to apply anyway.

  • 6+ years of experience in implementing enterprise data solutions.
  • 3+ years of experience using ETL/data pipeline tools (dbt, Airflow, Airbyte, Glue, Matillion, Stitch, etc.).
  • 2+ years of hands-on experience in data modeling, optimization and database architecture.
  • Extensive experience in writing, debugging, and tuning SQL queries.
  • Strong programming experience in Python or Java.
  • Good understanding of AWS data services (DynamoDB, RDS, Data Pipeline, EMR, Lambda, Glue, ECS, etc.) and cloud data warehouses such as Snowflake.
  • Good knowledge of performance tuning, optimization, and debugging of data pipelines.
  • Experience with enterprise data and business intelligence platforms serving large-scale enterprise deployments.
  • Proven track record of leading and delivering complex data projects within cloud environments.
  • Good understanding of distributed computing and frameworks like Apache Spark, Hadoop, and Apache Kafka for handling large volumes of data.
  • Good problem-solving, leadership, and communication skills.
  • Good understanding of software engineering principles and standards.

What Can Help Your Application Stand Out:

  • Experience with Docker/Kubernetes.
  • Proficiency in data modeling, schema design, and optimizing data structures for performance in Snowflake.
  • Working experience in startup environments.
  • Experience with Agile methodologies, CI/CD automation, and test-driven development.
  • Knowledge of data governance, security, and compliance standards within cloud-based data solutions.
  • Understanding of reporting tools such as Tableau, QlikView, Looker, or Power BI.
  • Database administration background.

Our Commitment to Inclusivity and Diversity

At G2, we are committed to creating an inclusive and diverse environment where people of every background can thrive and feel welcome. We consider applicants without regard to race, color, creed, religion, national origin, genetic information, gender identity or expression, sexual orientation, pregnancy, age, or marital, veteran, or physical or mental disability status. Learn more about our commitments here.

Company Info.

G2 Crowd

G2.com (formerly G2 Crowd) is a peer-to-peer review site[1] headquartered in Chicago, Illinois. It was known as G2 Labs, Inc. until 2013.[2] The company was launched in May 2012 by former BigMachines employees, with a focus on aggregating user reviews for business software.

  • Industry
    Information Technology
  • No. of Employees
    575
  • Location
    Chicago, Illinois, United States


G2 Crowd is currently hiring for Senior Data Engineer roles in Bangalore, Karnataka, India, with an average base salary of ₹90,000 - ₹250,000 per month.
