Senior Big Data Engineer (Senior Specialty Software Engineer)

Wells Fargo

Job Description

At Wells Fargo, we are looking for talented people who will put our customers at the center of everything we do. We are seeking candidates who embrace diversity, equity, and inclusion in a workplace where everyone feels valued and inspired.

Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.

Technology sets IT strategy; enhances the design, development, and operations of our systems; optimizes the Wells Fargo infrastructure; provides information security; and enables Wells Fargo's global customers to have 24/7 banking access through in-branch, online, ATM, and other channels.

Our mission is to deliver stable, secure, scalable, and innovative services at speeds that delight and satisfy our customers and unleash the potential of our employees.

The Enterprise Functions Technology (EFT) group provides technology solutions and support for Risk, Audit, Finance, Marketing, Human Resources, Corporate Properties, and Stakeholder Relations business lines. In addition, EFT provides unique technology solutions and innovation for Wells Fargo Technology, Enterprise Shared Services, and Enterprise Data Management. This combined portfolio of applications and tools is continually engineered to meet the challenges of stability, security, scalability, and speed.

Within EFT, the Corporate Risk Technology (CRT) group helps all Wells Fargo businesses identify and manage risk. We focus on three key risk areas: credit risk, operational risk, and market risk. We help our management and Board of Directors identify and monitor risks that may affect multiple lines of business, and take appropriate action when business activities exceed the risk tolerance of the company.

The Calculation Services group in Corporate Risk Technology is seeking a Senior Big Data Engineer (Sr. Specialty Software Engineer) to build and support the Python platform and SDK that run the qualitative and quantitative risk models for multiple lines of business. The position offers the opportunity to work on the latest open-source technologies in the Big Data and Python ecosystem.

We make extensive use of Spark, REST APIs, Django, Django REST Framework (DRF), and React JS to develop and maintain a comprehensive SDK and framework that makes self-service development, deployment, and end-to-end batch and real-time forecasting solutions available to our business. While we focus on integrating with open-source Apache and Linux Foundation AI & Data products, we also integrate with the latest commercial solutions such as AtScale, Dremio, H2O.ai, Tableau, and more through an API-first integration strategy.

Responsibilities include:

  • Stand up cutting-edge analytical capabilities, leveraging automation, cognitive, and science-based techniques to manage data and models, and drive operational efficiency by offering continuous insights and improvements.
  • Help design and implement algorithms and tools for analytics and data science teams.
  • Use a variety of languages, tools, and frameworks to unify data and systems.
  • Collaborate with modelers, developers, DevOps engineers, and project managers to attain project goals.
  • Apply a strong understanding of Python code, CI/CD deployment, and test automation suites.
  • Drive a culture of automation and test coverage.
  • Architect for microservices, APIs, cloud-native, and headless architecture, decoupling the frontend and backend of the technology stack.

Required Qualifications, US:

  • 4+ years of Specialty Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
  • 4+ years of experience in Python
  • 2+ years of experience creating APIs using Python, preferably with Django and DRF
  • 2+ years of experience with Big Data or Hadoop tools such as Spark, Hive, Kafka, and MapReduce
  • 2+ years of experience with Python data science libraries such as NumPy, Pandas, and SciPy
  • 2+ years of experience in PySpark, HDFS, and distributed computing
  • 2+ years of experience in H2O software or Keras with TensorFlow
  • 2+ years of experience and training related to configuration with cloud service providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure

Desired Qualifications:

  • A Master’s degree or higher in Computer Science or a related field
  • Knowledge and understanding of DevOps principles
  • Leadership skills to drive the work stream technically
  • In-depth understanding of story estimation, design reviews, code reviews, and quality code delivery
  • 2+ years of experience in Oracle
  • 2+ years of experience in Kubernetes
  • 2+ years of experience building, deploying, and securing cloud platforms
  • Experience designing and building reusable solutions in the data work stream of the model development life cycle
  • Extensive understanding of the banking domain, with an emphasis on risk and finance forecasting

Job Expectations:

  • Ability to travel up to 10% of the time

Company Info.

Wells Fargo

Wells Fargo & Company is an American multinational financial services company with corporate headquarters in San Francisco, California, operational headquarters in Manhattan, and managerial offices throughout the United States and internationally. The company has operations in 35 countries with over 70 million customers globally. It is considered a systemically important financial institution by the Financial Stability Board.

Wells Fargo is currently hiring for Senior Big Data Engineer jobs in Springfield, NJ, USA, with an average base salary of $160,000 - $240,000 per year.
