AWS Cloud Big Data Architect

Impetus

Job Description

Do you want to work with the best minds in the industry and create high-performance, scalable solutions?

Do you want to be part of the team that is building next-gen data platforms?

Then this is the place for you. You will architect and deliver data engineering solutions at petabyte scale that solve complex business problems.

  • 12+ years of experience implementing high-end software products.
  • Provides technical leadership in the Big Data space (Hadoop stack: MapReduce, HDFS, Hive, HBase, etc.) across engagements and contributes to open source Big Data technologies.

Must have :

  • Operating knowledge of cloud computing platforms (AWS, especially Redshift, Glue, DynamoDB, EMR, EC2, SWF, S3 services and the AWS CLI)
  • Familiarity with columnar storage formats, e.g., Parquet, ORC
  • Visualize and evangelize next-generation infrastructure in the Big Data space (batch, near real-time, and real-time technologies).
  • Passionate about continuous learning, experimenting, applying and contributing towards cutting-edge open source technologies and software paradigms
  • Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, implementation and management of data warehouses and data analytics systems
  • Expert-level proficiency in at least one programming language, such as Java
  • Strong understanding of and experience with distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN, MapReduce, HDFS) and associated technologies -- one or more of Hive, Sqoop, Avro, Flume, Oozie, ZooKeeper, etc.
  • Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib) -- see the sketch after this list
  • Experience working within a Linux computing environment, including command-line tools and shell/Python scripting for automating common tasks
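
In practice, the requirements above come together in small jobs that process columnar data in S3 with Spark. The sketch below is purely illustrative -- the bucket, prefixes, and column names are hypothetical and not taken from this posting -- and shows a minimal PySpark job that reads Parquet from S3, aggregates with Spark SQL functions, and writes partitioned Parquet back; on EMR, such a script would typically be submitted as a step via spark-submit.

    # Minimal PySpark sketch (hypothetical bucket, paths, and columns).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders-daily-rollup")  # hypothetical job name
        .getOrCreate()
    )

    # Read raw Parquet data from S3 (on EMR, s3:// paths resolve via EMRFS).
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Aggregate with Spark SQL functions: revenue and order count per day.
    daily = (
        orders
        .groupBy(F.to_date("order_ts").alias("order_date"))
        .agg(
            F.sum("amount").alias("total_revenue"),
            F.count("*").alias("order_count"),
        )
    )

    # Write the result back to S3 as date-partitioned Parquet.
    (
        daily.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/daily_revenue/")
    )

    spark.stop()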

Department : Cloud & Data Engineering

Open Positions : 2

Skills Required : AWS, Spark, Python, Big Data

Role :

  • Evaluate and recommend Big Data technology stack best suited for customer needs
  • Drive significant technology initiatives end to end and across multiple layers of architecture
  • Provide strong technical leadership in adopting and contributing to open source technologies related to Big Data across multiple engagements
  • Design/architect complex, highly available, distributed, fail-safe compute systems dealing with considerable amounts (GB/TB) of data
  • Identify and incorporate non-functional requirements into the solution (performance, scalability, monitoring, etc.) -- see the illustrative sketch below
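
As one concrete, purely illustrative way of building monitoring into such a solution on AWS, a pipeline step can publish custom CloudWatch metrics that dashboards and alarms are then defined on; the namespace, metric name, and dimensions below are hypothetical placeholders, not part of this role description.

    # Illustrative sketch: emit a custom CloudWatch metric from a pipeline step.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    def report_records_processed(job_name: str, record_count: int) -> None:
        """Publish a throughput metric that alarms and dashboards can be built on."""
        cloudwatch.put_metric_data(
            Namespace="DataPipelines",  # hypothetical namespace
            MetricData=[
                {
                    "MetricName": "RecordsProcessed",
                    "Dimensions": [{"Name": "JobName", "Value": job_name}],
                    "Value": float(record_count),
                    "Unit": "Count",
                }
            ],
        )

    if __name__ == "__main__":
        report_records_processed("orders-daily-rollup", 125_000)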

Education/Qualification : BE / B.Tech / MCA / M.Tech / M.Com

Years Of Exp : 10 to 15 years

Company Info.

Impetus

Impetus is focused on creating big business impact through big data solutions for Fortune 1000 enterprises. The company offers a unique mix of software products, consulting services, data science capabilities, and technology expertise. Leverage our big data solutions for automated warehouse transformation (Impetus Workload Transformation Solution), real-time streaming and batch analytics (StreamAnalytix), and rapid application development on Spark.

  • Industry
    Information Technology
  • No. of Employees
    2,005
  • Location
    720 University Avenue, Los Gatos, CA 95032, USA


Impetus is currently hiring for Big Data Architect roles in Gurugram, Haryana, India, with an average base salary of ₹90,000 - ₹250,000 / month.
