Data Engineer + API Developer

Kyndryl Holdings, Inc.

Job Description

Our world has never been more alive with opportunities and, at Kyndryl, we’re ready to seize them. We design, build, manage and modernize the mission-critical technology systems that the world depends on every day. Kyndryl is at the heart of progress — dedicated to helping companies and people grow strong. Our people are actively discovering, co-creating, and strengthening. We push ourselves and each other to seek better, to go further, and we carry this energy to our customers. At Kyndryl, we want you to keep growing, and we’ll provide plenty of opportunities to make that happen.

Please be aware that we have the Kyndryl candidate zone hosted by IBM for a certain period. If you have applied for an IBM role previously, you will be able to log into the candidate zone using your previous IBM log-in details. When in the candidate zone, you will be able to see your previous applications for both IBM and Kyndryl. For further information on privacy, please visit www.kyndryl.com/privacy.

Your Role and Responsibilities

As a Data Engineer/API Developer, you are expected to be functionally knowledgeable in deploying and managing AI/ML models and APIs, with a strong emphasis on API development and maintenance, model deployment, and governance using cloud-native services as well as third-party DSML platforms. In this role on our Data and AI team, you will provide support for one or more projects, assist in defining the scope and sizing of work, and work on proof-of-concept development. You will support the team by providing data engineering, model deployment, and management solutions based on the business problem, integrating with third-party services, and designing and developing complex model pipelines for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in pre-sales and various pursuits focused on our clients' business needs.

You will also contribute in a variety of roles spanning thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work across multiple industry domains.

Responsibilities

  • Design and build scalable machine learning services and data platforms
  • Develop Model Pipelines (DevOps) for model reusability and version controlling
  • Serve models in production leveraging serving engines such as TensorFlow Serving, TorchServe, Seldon, etc.
  • Analyze, design, develop, code, and implement programs in one or more programming languages for web and rich internet applications, cloud-native applications, and third-party applications
  • Support applications with an understanding of system integration, test planning, scripting, and troubleshooting
  • Define specifications, develop new programs, modify existing programs, prepare test data, and prepare functional specifications
  • Utilize benchmarks, metrics, and monitoring to measure and improve models
  • Develop integrations with monitoring tools (Prometheus, the Grafana stack, cloud-native monitoring stacks) to detect model drift and raise alerts
  • Research, design, implement and validate cutting-edge deployment methods across hybrid cloud scenarios
  • Work with data scientists to implement ML, AI, and NLP techniques for article analysis and attribution
  • Support the build of complex AI/ML models and help in deploying them either on cloud or 3rd party DSML platforms
  • Containerize the models developed by data scientists and deploy them in Kubernetes/container environments
  • Develop and maintain documentation of model flows, integrations, pipelines, etc.
  • Support the teams by providing technical solutions from a model deployment and architecture perspective, ensuring the right direction and proposing resolutions to potential model pipeline and deployment-related problems
  • Develop proofs of concept (PoCs) of key technology components for project stakeholders
  • Collaborate with other members of the project team (Architects, Data Engineers, Data Scientists) to support delivery of additional project components
  • Evaluate and create PoVs around the performance aspects of DSML platforms and tools in the market against customer requirements
  • Work within an Agile/DevOps delivery methodology to deliver proof-of-concept and production implementations in iterative sprints
  • Assist in driving improvements to the enterprise AI technology stack, with a focus on the digital experience for the user as well as model performance and security, to meet the needs of the business and customers now and in the future
  • Support technical investigations and proofs of concept, both individually and as part of a team, including being hands-on with code, to make technical recommendations
  • Create documentation for architecture principles, design patterns and examples, technology roadmaps, and future planning

Required Technical and Professional Expertise

Required Skills:

Python, machine learning engineering, and API development, with 3–5 years of experience and the following skills:

  • Strong DevOps, data engineering, and ML background with AWS, GCP, or Azure cloud
  • Experience with the design and development of REST API platforms using Apigee/APIM, including converting web services from SOAP to REST or vice versa
  • Experience with security frameworks (e.g., JWT, OAuth2)
  • Experience with API-layer concerns such as security, custom analytics, throttling, caching, logging, monetization, and request/response modification using Apigee
  • Proficient in SQL and stored procedures (e.g., Oracle, MySQL)
  • Experience with Unix/Linux operating systems
  • Experience with Scrum and other Agile processes.
  • Knowledge of Jira, Git/SVN, Jenkins
  • Experience creating REST API documentation using Swagger and YAML or similar tools (desirable)
  • Experience with integration frameworks (e.g., Mule, Camel) (desirable)
  • Experience with one or more MLOps tools: ModelDB, Kubeflow, Pachyderm, Data Version Control (DVC), etc.
  • Experience in distributed computing, data pipelines, and AI/ML
  • Experience setting up and optimizing databases for production use in an ML application context
  • Experience with Docker, Kubernetes (OpenShift, EKS, AKS, GKE, vanilla K8s), Jenkins, and any CI/CD tool
  • Experience in Spark, Kafka, HDFS, Cassandra
  • Strong hands-on knowledge of Python, Apache Spark, PySpark, and Kubernetes
  • Hands-on expertise in at least one data science project, including model training and deployment on hyperscalers (AWS, Azure, GCP)
  • Experience with any of the following solutions: AWS SageMaker, Azure ML, or GCP Vertex AI, or third-party solutions like H2O.ai, DataRobot, etc.

Preferred Technical and Professional Experience

  • Python programmer
  • DevOps - CI/CD implementations
  • Data Science skills - Model development, Training
  • API development
  • Strong knowledge of web services (WSDL/SOAP, REST)
  • Strong knowledge of Java/Python frameworks (Spring MVC, Spring Security, etc.)

Required Education

  • Bachelor's Degree

Preferred Education

  • Master's Degree

Being You @ Kyndryl

Kyndryl is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Kyndryl is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

Other things to know

When applying to jobs of your interest, we recommend that you do so for those that match your experience and expertise. Our recruiters advise that you apply to not more than 3 roles in a year for the best candidate experience.

For additional information about location requirements, please discuss with the recruiter following submission of your application.

Primary Job Category

  • Data Science

Role (Job Role)

  • Data Scientist

Employment Type

  • Full-Time

Contract Type

  • Regular

Position Type

  • Early Professional

Travel Required

  • No Travel

Company

  • (Y030) Kyndryl Solutions Private Limited

Is this role a commissionable / sales incentive based position

  • No

Company Info.

Kyndryl Holdings, Inc.

Kyndryl Holdings, Inc. is an American multinational information technology infrastructure services provider that designs, builds, manages and develops large-scale information systems. The company was created from the spin-off of IBM's infrastructure services business.

  • Industry
    Information Technology
  • No. of Employees
    90,000
  • Location
    New York, NY, USA


Kyndryl Holdings, Inc. is currently hiring for Data Engineer jobs in Bengaluru, Karnataka, India, with an average base salary of ₹600,000 – ₹1,000,000 / year.
