DXC Technology is Hiring for Big Data Engineer

Job Overview

  1. DXC Technology is hiring a Big Data Engineer-3 for its Data Sciences team in Bangalore.
  2. This full-time position is best suited for professionals with hands-on experience in Hadoop, Spark, Kafka, and large-scale distributed systems.
  3. The role involves building and optimizing data pipelines, working with real-time streaming data, and contributing to scalable big data architectures.
  4. Candidates should have a strong background in Python, PySpark, Linux/Unix systems, and SQL scripting.
  5. This is an excellent opportunity to work on advanced data engineering projects using modern tools and technologies in a high-impact environment.

Job Details

  1. Company: DXC Technology
  2. Job Position: Big Data Engineer-3
  3. Location: Bangalore, India
  4. Category: Data Sciences
  5. Job ID: 51542603
  6. Posted Date: 04/07/2025
  7. Job Type: Full-Time
  8. Contract Type: Permanent

Key Responsibilities

  1. Design and implement large, scalable distributed systems for data processing.
  2. Develop ETL pipelines using Apache NiFi and related technologies.
  3. Work on real-time data ingestion and processing using Kafka and Spark (see the sketch after this list).
  4. Perform debugging and troubleshooting of big data jobs on Hadoop and Hive.
  5. Collaborate with data scientists and analysts to ensure data accuracy and integrity.
  6. Maintain process documentation and create detailed design specifications.
  7. Utilize Cloudera tools for Hadoop administration and job monitoring.
  8. Debug production issues using command-line tools and logs.
  9. Support the deployment of data pipelines in cloud or hybrid environments.
  10. Follow industry best practices for performance optimization and fault tolerance.
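
To make the Kafka and Spark responsibility concrete, here is a minimal sketch of a Structured Streaming ingestion job of the kind the role describes. The broker address, topic name, event schema, and console sink are illustrative assumptions for practice, not details taken from the posting:

```python
# Minimal sketch of a Kafka-to-Spark Structured Streaming ingestion job.
# Requires Spark with the spark-sql-kafka-0-10 package available.
# Broker, topic, schema, and sink below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Hypothetical schema for JSON payloads on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the stream from Kafka; address and topic are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; parse the value column as JSON.
parsed = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Windowed aggregation with a watermark to bound late-arriving data.
agg = (
    parsed
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Console sink for demonstration; a real pipeline would target
# HDFS, Hive, or another durable store.
query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```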

Required Skills and Knowledge

  1. Strong programming skills in Python, PySpark, Java, and/or Scala.
  2. Expertise in big data technologies such as Hadoop, HDFS, YARN, MapReduce, Spark, Hive, and Impala.
  3. Solid experience with ETL tools, particularly Apache NiFi.
  4. Proficiency in Linux/Unix environments and shell scripting.
  5. Sound understanding of streaming frameworks like Apache Kafka.
  6. In-depth knowledge of SQL and data manipulation techniques.
  7. Ability to troubleshoot and resolve failures in Hive and Hadoop environments.
  8. Familiarity with Cloudera platform for Hadoop cluster administration.
  9. Experience designing fault-tolerant and scalable data architectures.
  10. Strong communication and documentation skills.

Preferred Skills

  1. Experience with cloud technologies such as AWS, Azure, GCP, or Databricks.
  2. Familiarity with CI/CD practices in data engineering pipelines.
  3. Exposure to data security and governance best practices.
  4. Ability to integrate structured and unstructured data sources.
  5. Experience working in agile, cross-functional teams.

About DXC Technology

DXC Technology is a global leader in IT services and consulting, helping clients harness the power of innovation to deliver business transformation. The company offers technology solutions that drive efficiency, scalability, and digital resilience. Operating across more than 70 countries, DXC provides cutting-edge services in cloud computing, analytics, cybersecurity, and enterprise platforms. DXC empowers its employees with continuous learning, collaborative projects, and career growth opportunities across a broad range of technologies.

Why Join DXC Technology

  1. Work with global leaders in big data, AI, and enterprise technologies.
  2. Engage in challenging projects that drive business transformation.
  3. Collaborate with talented teams in a multicultural environment.
  4. Access continuous learning and technical certification programs.
  5. Gain hands-on experience with the latest cloud and big data tools.
  6. Be part of a company committed to integrity, excellence, and innovation.
  7. Explore diverse career paths within DXC’s global ecosystem.
  8. Contribute to solving real-world data challenges that matter.

How to Prepare for the Role

  1. Deepen your understanding of Hadoop, Spark, and Kafka ecosystems.
  2. Practice hands-on coding in PySpark and Python for data transformations (see the sketch after this list).
  3. Set up and run sample NiFi workflows for ETL tasks.
  4. Review system debugging techniques using Unix/Linux command line.
  5. Study cloud platforms and big data services such as AWS EMR or Azure Databricks.
  6. Familiarize yourself with Cloudera admin tools and job monitoring dashboards.
  7. Work on optimizing Hive queries and managing schema changes.
  8. Prepare to showcase your project experience and problem-solving approach in interviews.
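
As a concrete practice exercise for points 2 and 7, the following is a minimal, self-contained PySpark sketch: it builds a tiny DataFrame in place of a raw source, applies typical cleaning transformations, and writes a date-partitioned Parquet layout so a later query can rely on partition pruning, the same idea behind optimizing partitioned Hive tables. The column names and the /tmp/orders_clean path are made up for illustration:

```python
# Practice exercise: clean raw records and write a partitioned table.
# All names, values, and paths are illustrative, not job specifics.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-practice").getOrCreate()

# Sample rows standing in for a raw source (e.g., files landed by NiFi).
raw = spark.createDataFrame(
    [
        ("o-1", "2025-04-01", "IN", "149.90"),
        ("o-2", "2025-04-01", "in", None),
        ("o-3", "2025-04-02", "US", "75.00"),
    ],
    ["order_id", "order_date", "country", "amount"],
)

# Typical transformation steps: normalize case, cast types, drop bad rows.
clean = (
    raw
    .withColumn("country", F.upper(F.col("country")))
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date"))
    .dropna(subset=["amount"])
)

# Partitioning by date mirrors common Hive table layouts.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/tmp/orders_clean")

# Filtering on the partition column lets Spark prune partitions instead of
# scanning everything; the same principle speeds up partitioned Hive queries.
spark.read.parquet("/tmp/orders_clean").where(
    F.col("order_date") == "2025-04-02"
).show()
```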

Big Data Engineering Career Tips

  1. Stay updated with emerging trends in real-time data processing.
  2. Invest in certifications for Hadoop, Spark, or cloud platforms.
  3. Join communities focused on big data and distributed computing.
  4. Document your data engineering projects on GitHub.
  5. Build your knowledge of data security and governance frameworks.
  6. Contribute to open-source projects or forums.
  7. Practice designing scalable data solutions from scratch.
  8. Learn how to integrate machine learning workflows with data pipelines.

Important Links

Apply link: Click here
