Hadoop Developer Jobs in Bengaluru & Pune | Capgemini Careers

Hadoop Developer

 

Developing distributed Big Data applications using Spark and Elasticsearch on the MapR Hadoop platform
The Hadoop Developer is responsible for designing, developing, testing, tuning and building large-scale data processing systems for data ingestion and data products, enabling the client to improve the quality, velocity and monetization of data assets for both operational applications and analytical needs
Strong work experience with the Hadoop distributed computing framework (including Apache Spark)
A very strong hold on one or more object-oriented programming languages (e.g. Scala, Python)
Experience in a hosted PaaS cloud environment (AWS, Azure or GCP)
Scripting skills in Shell or Python
Knowledge of data warehousing (DWH) concepts
In-depth understanding of at least one relational database system
Strong UNIX shell scripting experience to support data warehousing solutions
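For candidates new to Hadoop's programming model, the canonical example behind the distributed computing framework mentioned above is word count. The following is a minimal, illustrative pure-Python sketch of the map/shuffle/reduce stages (not Hadoop or Spark API code; function names are invented for illustration):

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data on hadoop", "spark on hadoop"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts)  # {'big': 1, 'data': 1, 'on': 2, 'hadoop': 2, 'spark': 1}
```

In a real Hadoop or Spark job the shuffle is handled by the framework across machines; only the map and reduce logic is written by the developer.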

Primary skills
Spark/Scala
Python
Good experience with the Hadoop/Hive ecosystem

Secondary skills
Willingness to adapt to new technologies
Understanding of machine learning (ML) is an added advantage
Strong communication skills
Ability to deliver tasks independently

Location: Bengaluru & Pune

 

Apply for the Job
