(Remote) Specialist, Data Engineer (Docker/Kubernetes)


Job Description

If you’re passionate about innovation and love working in an environment where you can constantly improve and adopt new technologies to drive business results, then Nationwide’s Information Technology team could be the place for you!

The team supports an application suite that provides several foundational advanced analytics capabilities serving all enterprise analytical needs. The work involves supporting the Enterprise Analytics Office and citizen analytics developers across the organization with end-to-end analytical model development and deployment through a state-of-the-art, in-house, patented, automated analytical model pipeline.

This role collaborates with various IT teams, including the Enterprise Data Office, EAO, IRM, and Cloud Data Tower, and works directly with the data scientist community on development needs related to automated pipeline support.

This role supports the build, deployment, run support, and monitoring of predictive models for the enterprise.

We are looking for an experienced Software Engineer with the following skills:

  • DevOps principles and tools (Jenkins, Harness)
  • Working knowledge of Linux
  • Kubernetes
  • AWS engineering skillset (e.g., S3, EC2, EMR, SageMaker, FSx, and more)
  • Database support and management; SQL knowledge
  • Experience with ETL tools such as Informatica/IICS/Databricks
  • AI/ML skillset, including programming languages such as Python and R
  • Experience with APIs; Apigee programming skills
  • Agile development principles

Compensation grade F

Job Description Summary

Nationwide’s industry-leading workforce is passionate about creating data solutions that are secure, reliable, and efficient in support of our mission to provide extraordinary care. Nationwide embraces an agile work environment and collaborative culture through the understanding of business processes, relationship entities, and requirements using data analysis, quality, visualization, governance, engineering, robotic process automation, and machine learning to produce targeted data solutions. If you have the drive and desire to be part of a future-forward, data-enabled culture, we want to hear from you.

Are you a go-getter and a team player with an innovative mindset? Do you have a passion for data and working with cutting-edge technologies? Would you love to work with a team that is high performing, fun, and enjoys collaborating to get the job done? If so, we would love to have you as part of the tech minions!

The team supports an application suite that provides several foundational advanced analytics capabilities for all enterprise analytical needs. The work involves analytical model development and deployment through state-of-the-art, in-house, patented, automated analytical model pipelines.

The ideal candidate will have the following characteristics:

  • Solid communication, people interaction, and leadership skills
  • Thought leadership with a strong ability to analyze and troubleshoot
  • Works closely with development teams to ensure that design specifications are implemented
  • Well versed in data engineering and analytics
  • Knowledge of modern integration patterns (streaming/APIs) and containerized solutions (Docker / Kubernetes)

Technical skills:

  • Agile software development methodology
  • DevOps tools (Jenkins, Harness)
  • Working knowledge of Linux
  • Kubernetes (CNP)
  • AWS services (e.g., S3, EC2, EMR, SageMaker)
  • Database support and management
  • SQL knowledge
  • Experience with ETL tools such as Informatica/IICS/Databricks
  • AI/ML skillset, including Python and R
  • Experience with APIs; Apigee programming skills

Job Description

Key Responsibilities:

  • Provides basic to moderate technical consultation on data product projects by analyzing end-to-end data product requirements and existing business processes to lead the design, development, and implementation of data products.
  • Produces data building blocks, data models, and data flows for varying client demands such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research & exploration
  • Translates business data stories into a technical story breakdown structure and work estimate so that value and fit for a schedule or sprint can be determined.
  • Creates simple to moderate business user access methods to structured and unstructured data using techniques such as mapping data to a common data model, NLP, AI, statistical computations, transforming data as necessary to satisfy business rules, and validation of data content.
  • Assists the enterprise DevSecOps team and other internal organizations with CI/CD best practices using tools such as JIRA, Jenkins, and Confluence.
  • Implements production processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
  • Develops and maintains scalable data pipelines for both streaming and batch requirements and builds out new API integrations to support continuing increases in data volume and complexity.
  • Writes and performs data unit/integration tests for data quality. With input from business requirements/stories, creates and executes test data and scripts to validate that quality and completeness criteria are satisfied. Can create automated testing programs and data that are reusable for future code changes.
  • Practices code management and integration following engineering Git principles and repository practices.

May perform other responsibilities as assigned.
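To illustrate the data quality testing responsibility above, here is a minimal sketch of a reusable data-quality check of the kind a candidate might write. The record fields (`policy_id`, `premium`) and validation rules are illustrative assumptions, not Nationwide's actual schema, and only the standard library is used.

```python
def check_policy_rows(rows):
    """Validate completeness and quality rules for a batch of records.

    Returns a list of (row_index, message) tuples; an empty list means
    every record passed. Field names here are hypothetical examples.
    """
    errors = []
    for i, row in enumerate(rows):
        # Completeness rule: every record needs a non-empty identifier.
        if not row.get("policy_id"):
            errors.append((i, "missing policy_id"))
        # Quality rule: premium must be present and non-negative.
        premium = row.get("premium")
        if premium is None or premium < 0:
            errors.append((i, "premium must be a non-negative number"))
    return errors

# Usage: run against sample data before promoting a pipeline change.
sample = [
    {"policy_id": "P-001", "premium": 120.0},
    {"policy_id": "", "premium": -5.0},
]
print(check_policy_rows(sample))
# → [(1, 'missing policy_id'), (1, 'premium must be a non-negative number')]
```

A check like this can be wrapped in a unit test framework and re-run automatically on future code changes, matching the "reusable automated testing" responsibility described above.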

  • Reporting Relationships: Reports to Manager/Director Data Leader.
  • Education: Undergraduate studies in computer science, management information systems, business, statistics, math, a related field or comparable experience and education strongly preferred. Graduate studies in business, statistics, math, computer science or a related field are a plus.
  • License/Certification/Designation: Certifications are not required but encouraged.
  • Experience: Three to five years of relevant experience with data quality rules, data management organization/standards, practices and software development. Experience in data warehousing, statistical analysis, data models, and queries. One to three years’ experience with Cloud technology and infrastructure including security and access management. Insurance/financial services industry knowledge a plus.
  • Knowledge, Abilities and Skills: Data application and practices knowledge. Moderate to advanced skills with modern programming and scripting languages (e.g., SQL, R, Python, Spark, UNIX Shell scripting, Perl, or Ruby). Good problem solving, oral and written communication skills.
  • Other criteria, including leadership skills, competencies and experiences may take precedence.
  • Staffing exceptions to the above must be approved by the hiring manager’s leader and HR Business Partner.
  • Values: Regularly and consistently demonstrates the Nationwide Values.
  • ADA: The above statements cover what are generally believed to be principal and essential functions of this job. Specific circumstances may allow or require some people assigned to the job to perform a somewhat different combination of duties.

Company Info.


Nationwide Mutual Insurance Company and its affiliated companies form a group of large U.S. insurance and financial services companies based in Columbus, OH. The company also operates regional headquarters in Scottsdale, AZ; Des Moines, IA; San Antonio, TX; Gainesville, FL; Raleigh, NC; Sacramento, CA; and Westerville, OH. Nationwide currently has approximately 34,000 employees and is ranked #73 in the 2019 Fortune 500 list.

  • Location
    One Nationwide Plaza, West Nationwide Boulevard, Columbus, Ohio, USA


Nationwide is currently hiring for Data Engineer Specialist jobs in Columbus, OH, USA, with an average base salary of $120,000 to $190,000 per year.
