Elevate Your Career: Discover Exciting Opportunities in our Latest Job Openings
Hyderabad/Mangalore
Full Time
Posted Date: 21-06-2024
We are seeking a Hadoop DBA who will provide expert-level support for the Hadoop cluster environment. The ideal candidate will be articulate, approachable, and have practical, working knowledge of clusters, log monitoring, database replication, and high availability of resources. This position also assists the development staff in planning and execution by recommending solutions to complex data issues while keeping the overall health of the system in mind. Must be able to resolve time-sensitive production issues promptly.
6 to 10 years
Bachelor’s degree in computer science or an equivalent degree/experience, plus relevant work experience and relevant IT-related training/certifications.
Responsible for the implementation and ongoing administration of the Hadoop infrastructure.
Working with data delivery teams to set up new Hadoop users. This includes setting up Kerberos principals, testing HDFS, Hive, Pig, Spark, Impala, MapReduce, and Hue, and configuring access for the new users.
Cluster maintenance, including the creation and removal of nodes, using tools such as Ganglia, Nagios, and Cloudera Manager Enterprise.
Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
Monitor Hadoop clusters, their connectivity, and security.
Manage and review Hadoop log files.
File system management and monitoring.
HDFS support and maintenance.
Point of Contact for Vendor escalation.
Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
Screen Hadoop cluster job performance and perform capacity planning.
Disk space management.
Data modelling, design, and implementation based on recognized standards.
Database backup and recovery.
Automate manual tasks.
Superior knowledge of administering Hadoop in a Linux environment.
Expertise in setting up clusters and configuring high availability.
Ability to accurately monitor environments, resolve issues, and plan capacity improvements.
Solid understanding of storage environments used by Big Data systems.
Knowledge of cluster monitoring tools such as Ambari, Cloudera SCM, Ganglia, or Nagios.
Knowledge of Hadoop ecosystem components such as HDFS, YARN, Hive, Impala, Hue, and Pig.
Good understanding of OS concepts, network configuration, process management, and resource scheduling.
Good knowledge of different cloud environments.
Highly self-motivated and excellent time management skills. Strong interpersonal and communication skills.
6-10 years of experience with Hadoop technologies and all aspects of enterprise-grade Big Data solutions.
Proven experience with the full Hadoop ecosystem stack, e.g., Hive, Spark, Isilon, Kerberos.
Experience working in secure environments (PCI, HIPAA, TS/SCI, etc.).
Experience with programming and scripting languages, e.g., Python and Java.
Demonstrated collaboration with multiple teams to deliver successful solutions.
Expertise in general operations, including troubleshooting.
Adheres to standards and guidelines.
Flexibility with work hours and willingness to provide 24x7 support for critical production systems.
Continuously improves skill sets.