Data Engineer

Dolby

Job Description

Join the leader in entertainment innovation and help us design the future. At Dolby, science meets art, and high tech means more than computer code. As a member of the Dolby team, you’ll see and hear the results of your work everywhere, from movie theaters to smartphones. We continue to revolutionize how people create, deliver, and enjoy entertainment worldwide. To do that, we need the absolute best talent. We’re big enough to give you all the resources you need, and small enough so you can make a real difference and earn recognition for your work. We offer a collegial culture, challenging projects, and excellent compensation and benefits, not to mention a Flex Work approach that is truly flexible to support where, when, and how you do your best work.

Play a key role as part of Dolby's new R&D Center in Bangalore as a Data Engineer in our Advanced Technology Group (ATG). ATG is the research and technology arm of Dolby Labs. It has multiple competencies that innovate technologies in audio, video, AR/VR, gaming, music, and movies. Many areas of expertise related to computer science and electrical engineering, such as AI/ML, computer vision, image processing, algorithms, digital signal processing, audio engineering, data science & analytics, distributed systems, cloud, edge & mobile computing, natural language processing, knowledge engineering and management, social network analysis, computer graphics, image & signal compression, computer networking, and IoT, are highly relevant to our research.

Responsibilities:

As a Data Engineer, you’ll be part of a growing engineering team building and designing our core data infrastructure for our internal technology research and development efforts. You’ll have the chance to partner closely with our research and data science teams to understand data and functional requirements. We are looking for an experienced data professional who is a problem solver and logical thinker, and who is passionate about everything related to data and analytics. Your responsibilities include:

  • Create and maintain optimal data pipeline architecture for data coming from different sources, in various formats, and of different content types (text, audio, video, etc.), allowing data to be standardized, cleaned, and ingested.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Design and develop solutions that are scalable, generic, and reusable. Be responsible for collecting, storing, processing, and analyzing huge data sets including, but not limited to, audio, video, and metadata.
  • Develop techniques to analyze and enhance both structured and unstructured data, and work with big data tools and frameworks.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Databricks, and AWS ‘big data’ technologies.
  • Create data tools for the research and data science teams.

What You Bring to the Role

  • BSc/MSc degree in CS or EE. Work experience desired, but not required.
  • Experience building and optimizing streaming big data pipelines, architectures, and data sets.
  • Deep understanding of data pipeline frameworks including Databricks and Fivetran.
  • Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
  • Experience with, or a solid theoretical understanding of, data workflows including:
      • Ingestion
      • Batch and stream processing
      • Storage and archiving
      • Visualization/reporting and dashboards
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Understanding of the current state of infrastructure automation, continuous integration/deployment (CI/CD), SQL/NoSQL, security, networking, and cloud-based delivery models.
  • In-depth understanding of:
      • NoSQL databases and related big data technologies (Kafka, HBase, Spark, Hadoop, Cassandra, MongoDB, etc.)
      • SQL development and any procedural extension language (T-SQL, PL/SQL, PL/pgSQL, etc.)
      • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
      • Distributed data processing frameworks such as Apache Spark and Apache Flink
      • Scalable ML pipelines for image, video, and audio modalities with tools such as Flyte, MLflow, Prefect, or Airflow
      • Data collection, labeling, cleaning, and generation tools such as Labelbox, SuperAnnotate, Scale AI, or V7
  • Scripting abilities with two or more general-purpose programming languages, including but not limited to Java, C/C++, C#, Objective-C, Python, and JavaScript.
  • Data modeling and extraction of data from different sources
  • Strong documentation and communication skills, and client-facing experience.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Company Info.

Dolby

Dolby Laboratories, Inc. is an American company specializing in audio noise reduction, audio encoding/compression, spatial audio, and HDR imaging. Dolby licenses its technologies to consumer electronics manufacturers.

  • Industry
    Entertainment
  • No. of Employees
    2,368
  • Location
    Civic Center, San Francisco, CA, USA


Dolby is currently hiring for Data Engineer roles in Bangalore, Karnataka, India, with an average base salary of ₹50,000 - ₹150,000 per month.
