Job Id: 20201016004
Job Role: Big Data Developer
Experience: 3+ Years
Job Location: Pune
Salary: Best in Industry
Vacancies: Not Mentioned
Job Description: Amdocs Careers job vacancy for Big Data Developer, October 2020:
Responsible for designing, developing, modifying, debugging, and/or maintaining software systems.
Responsible for one or more specific modules within the scope of a large software system.
What will your job look like?
You will design, develop, modify, debug and/or maintain software code according to functional, non-functional and technical design specifications.
You will follow Amdocs software engineering standards, the applicable software development methodology, and release processes to ensure code is maintainable, scalable, and supportable, and you will demo the software products to stakeholders.
You will investigate issues by reviewing/debugging code, provide fixes and workarounds, and review changes for operability to maintain existing software solutions.
You will work within a team, collaborate, and add value through participation in peer code reviews, provide comments and suggestions, and work with cross-functional teams to achieve goals.
You will assume technical accountability for your specific work products within an application and provide technical support during solution design for new requirements.
You will be encouraged to actively pursue innovation, continuous improvement, and efficiency in all assigned tasks.
All you need is…
Perform development & support activities in the data warehousing domain using big data technologies
Understand the high-level design and application interface design & build the low-level design. Perform application analysis & propose technical solutions for application enhancements or to resolve production issues
Perform development & deployment: should be able to code, unit test & deploy
Create the necessary documentation for all project deliverable phases
Handle production issues (Tier 2 support, weekend on-call rotation) & ensure SLAs are met
Should have a very clear understanding of Hadoop architecture
3+ years hands-on experience with Hadoop (HBase, HDFS, Pig, Hive, MapReduce)
Experience with SVN and build tools such as Ant, Maven, etc.
3+ years hands-on experience with SQL, Unix & advanced Unix shell scripting
Hands-on experience with file transfer mechanisms (NDM, SFTP, etc.)
Knowledge of Schedulers
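The MapReduce skill asked for above can be illustrated with a minimal word-count job written in the Hadoop Streaming style. This is only a hedged sketch for orientation: in a real cluster the mapper and reducer run as separate scripts reading stdin, submitted via the `hadoop-streaming` jar; here the shuffle/sort phase is simulated with `sorted()`.

```python
"""Minimal word-count MapReduce sketch in the Hadoop Streaming style.

Illustration only: on a real cluster, mapper and reducer would be
separate scripts launched with hadoop-streaming, and the framework
(not sorted()) would perform the shuffle/sort between them.
"""
from itertools import groupby
from operator import itemgetter


def mapper(lines):
    # Map phase: emit a (word, 1) pair for every word seen.
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1


def reducer(pairs):
    # Reduce phase: sum counts per word. Assumes pairs arrive sorted
    # by key, which the Hadoop shuffle phase guarantees.
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)


if __name__ == "__main__":
    data = ["big data big jobs", "data pipelines"]
    shuffled = sorted(mapper(data))  # stand-in for the shuffle/sort phase
    print(dict(reducer(shuffled)))   # {'big': 2, 'data': 2, 'jobs': 1, 'pipelines': 1}
```

The same mapper/reducer split carries over to Pig and Hive, which compile queries down to jobs of this shape.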
Good to have
Working knowledge of Kafka, Storm & Spark.
Working experience with data lake environments.
Experience handling XML, JSON, structured, fixed-width & unstructured files using custom MR/Pig/Hive; strong analytical thinking.
Willingness to learn all data warehousing technologies & to work outside the comfort zone in other ETL technologies (DataStage, Oracle, Mainframe, etc.). Hands-on working experience is a plus
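The file-format handling listed above (XML, JSON, fixed-width) often comes down to per-record parsing inside a custom mapper or UDF. A minimal sketch, with a purely hypothetical record layout (real layouts come from the application interface design documents mentioned earlier):

```python
import json

# Hypothetical fixed-width layout for illustration: (field, start, end).
FIXED_WIDTH_LAYOUT = [("cust_id", 0, 6), ("name", 6, 16), ("balance", 16, 24)]


def parse_fixed_width(line):
    """Slice a fixed-width record into named fields per the layout."""
    return {name: line[start:end].strip() for name, start, end in FIXED_WIDTH_LAYOUT}


def parse_record(line):
    """Dispatch on format: JSON lines start with '{'; anything else
    is treated as fixed-width here. Real pipelines would also branch
    on XML and delimited formats."""
    line = line.rstrip("\n")
    if line.lstrip().startswith("{"):
        return json.loads(line)
    return parse_fixed_width(line)


if __name__ == "__main__":
    print(parse_record('{"cust_id": "100001", "balance": "250.00"}'))
    print(parse_record("100002Jane Doe  00310.50"))
```

In a Hadoop Streaming mapper the same function would be applied to each stdin line before emitting key/value pairs.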