Big Data Engineer | Analytics | Banking & Financial Services

EXL Service

Job Description

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global team of nearly 2,000 data scientists and analysts assists client organizations with complex risk-minimization methods; advanced marketing, pricing, and CRM strategies; internal cost analysis; and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, and transportation and logistics industries.

Please visit www.exlservice.com for more information about EXL Analytics.

Responsibilities:

  • Design, build, and deploy batch and streaming data pipelines in the data lake using the Hadoop technology stack and tools such as Hive, PySpark, Python, Spark, and Spark Streaming
  • Design, build, and deploy error handling, data reconciliation, audit-log monitoring, and job scheduling using PySpark (a minimal sketch follows this list)
  • Design complex algorithms and apply machine learning and statistical methods to large datasets for reporting and for predictive and prescriptive modeling
  • Develop and implement coding best practices using Spark, Python, and PySpark
  • Collaborate with offshore and onshore teams and effectively communicate status, issues, and risks daily
  • Review current enterprise data architecture and ensure the solution aligns with the enterprise architecture standards and best practices
  • Develop the data model and structure for the data lake to ensure alignment with the data domains, integration needs, and efficient access to the data
  • Review and propose new standards for naming, describing, managing, modeling, cleansing, enriching, transforming, moving, storing, searching and delivering all data within the enterprise
  • Analyze existing and future data requirements, including data volumes, data growth, data types, latency requirements, data quality, the volatility of source systems, and analytic workload requirements
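
The sketch below illustrates the batch-pipeline, error-handling, and reconciliation responsibilities referenced above. It is a minimal PySpark example, assuming Hive-managed source and target tables; the database, table, and column names, the reconciliation threshold, and the partition column are illustrative placeholders, not details of any actual EXL pipeline.

    import logging

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("daily_batch_sketch")

    # Hive support lets the job read and write Hive-managed tables in the lake
    spark = (
        SparkSession.builder
        .appName("daily_batch_sketch")
        .enableHiveSupport()
        .getOrCreate()
    )

    try:
        # Read the raw source table (placeholder database/table names)
        src = spark.table("raw_db.transactions")
        src_count = src.count()
        log.info("source row count: %d", src_count)

        # Basic cleansing: drop null amounts and duplicate transaction ids
        cleaned = (
            src.filter(col("amount").isNotNull())
               .dropDuplicates(["txn_id"])
        )

        # Simple reconciliation check: abort the load if too many rows were rejected
        cleaned_count = cleaned.count()
        if cleaned_count < 0.9 * src_count:
            raise ValueError(
                f"reconciliation failed: {cleaned_count} of {src_count} rows survived cleansing"
            )

        # Write the curated table back to the lake, partitioned by load date
        (
            cleaned.write
            .mode("overwrite")
            .partitionBy("load_date")
            .saveAsTable("curated_db.transactions")
        )
        log.info("load succeeded: %d rows written", cleaned_count)
    except Exception:
        # Log and re-raise so the scheduler sees a failed run
        log.exception("daily batch load failed")
        raise
    finally:
        spark.stop()

In practice, the exception handler would also write an audit record and exit non-zero so the scheduler (for example, Oozie) can flag the failed run.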

Qualifications:

  • Required – PySpark, Spark, Hadoop
  • Required – Hive, Unix, shell scripting
  • Master’s or Bachelor’s degree in math, statistics, economics, computer engineering, or a related analytics field from a top-tier university, with a strong record of achievement
  • 3+ years of hands-on experience with Big Data platforms such as Cloudera, Hortonworks, or MapR
  • 2+ years of hands-on experience with the Hadoop ecosystem, including Hive, HDFS, MapReduce, Spark, YARN, Oozie, Sqoop, Kafka, Flume, etc.
  • 2+ years of hands-on experience with scripting/programming languages such as PySpark, Python, Scala, Pig, and Java to write batch and streaming jobs
  • 2+ years of experience with NoSQL database systems, including HBase, Cassandra, or a similar data store, and with implementing and using Elasticsearch or a similar indexer
  • Experience with ETL and ELT approaches to data ingestion and integration in the Hadoop ecosystem
  • Experience using an Agile approach to deliver solutions
  • Experience with Spark Streaming, Solr, Storm, and Kafka (a streaming sketch follows this list)
  • Experience implementing rapid-response queries
  • Experience handling large and complex data in a Big Data environment
  • Experience with designing and developing complex data ingestions and transformation routines
  • Deep understanding of Data Warehouse and Data Lake design, standards and best practices
  • Experience with data architecture, data modeling, data design, data governance and data quality process and best practices
  • Experience working in the financial services and risk analytics domain is a plus
  • Strong record of achievement, solid analytical ability, and an entrepreneurial hands-on approach to work
  • Outstanding written and verbal communication skills
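
For context on the Spark Streaming and Kafka items above, here is a minimal sketch, assuming Spark Structured Streaming, of a job that reads a Kafka topic and lands the parsed events in the data lake as Parquet. The broker addresses, topic name, event schema, and file paths are illustrative assumptions only.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import (
        DoubleType, StringType, StructField, StructType, TimestampType
    )

    spark = SparkSession.builder.appName("txn_stream_sketch").getOrCreate()

    # Assumed schema of the JSON events on the topic
    schema = StructType([
        StructField("txn_id", StringType()),
        StructField("account_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Subscribe to a Kafka topic (broker addresses and topic name are placeholders)
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
        .option("subscribe", "transactions")
        .load()
    )

    # Kafka delivers the payload as binary; cast to string and parse the JSON into columns
    parsed = (
        raw.select(from_json(col("value").cast("string"), schema).alias("t"))
           .select("t.*")
    )

    # Land the parsed stream in the lake as Parquet, with checkpointing for recovery
    query = (
        parsed.writeStream
        .format("parquet")
        .option("path", "/data/lake/raw/transactions")
        .option("checkpointLocation", "/data/lake/_checkpoints/transactions")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()

The checkpoint location is what allows the job to recover its Kafka offsets and resume cleanly after a restart.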

What we offer:

  • EXL Analytics offers an exciting, fast-paced, and innovative environment that brings together a group of sharp, entrepreneurial professionals who are eager to influence business decisions. From your very first day, you will have the opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses our clients engage in. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth.
  • Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques.
  • We provide guidance and coaching to every employee through our mentoring program, in which every junior-level employee is assigned a senior-level professional as an advisor.
  • The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.

Company Info.

EXL Service

EXL fuses digital technology, deep industry expertise, human expertise, and artificial intelligence to deliver Digital Intelligence that helps businesses grow faster and more profitably. EXL Service is a leading operations management and analytics company that helps our clients build and grow sustainable businesses by orchestrating our domain expertise, data, analytics, and digital technology.

  • Industry
    Information Technology
  • No. of Employees
    22,011
  • Location
    320 Park Avenue, New York, NY 10022, USA
  • Website
    www.exlservice.com
  • Jobs Posted

EXL Service is currently hiring for Big Data Engineer positions in Pittsburgh, PA, USA, with an average base salary of $120,000 to $190,000 per year.
