Big Data Engineer



12 Month Contract

Atlanta, GA (Hybrid)


Synergis’ client has engaged us in a search for a Big Data Engineer. The candidate must have Big Data engineering experience and a demonstrated affinity for working with others to create successful solutions. You will join a smart, highly skilled team with a passion for technology and work on the client’s state-of-the-art Big Data platform (Cloudera). The candidate must be a strong communicator, both written and verbal, with experience partnering with business areas to translate their data needs and data questions into project requirements. The candidate will participate in all phases of the data engineering life cycle and will, both independently and collaboratively, write project requirements, architect solutions, and perform data ingestion development and support duties.

Big Data Engineer | Education Requirements

  • Bachelor’s degree required, preferably in Information Systems, Computer Science, Computer Information Systems, or a related field

Big Data Engineer | Requirements

  • 6+ years of overall IT experience
  • 3+ years of experience with high-velocity, high-volume stream processing and real-time data pipelines using Apache Kafka and Spark Structured Streaming
  • Deep knowledge of troubleshooting and tuning Spark applications
  • 3+ years of experience with data ingestion from message queues (TIBCO, IBM MQ, etc.) and from files in formats such as JSON, XML, and CSV
  • 3+ years of experience with Big Data tools and technologies such as Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, and HDFS, or with cloud platforms such as AWS and GCP
  • 3+ years of experience building, testing, and optimizing Big Data ingestion pipelines, architectures, and data sets
  • 2+ years of experience with Kudu and Impala
  • 2+ years of experience with Scala and/or Python, including Spark development in PySpark or Scala
  • 2+ years of experience with NoSQL databases, including HBase and/or Cassandra
  • Knowledge of Unix/Linux platform and shell scripting is a must
  • Strong analytical and problem-solving skills

Big Data Engineer | Additional Skills

  • Experience with Cloudera/Hortonworks HDP and HDF platforms
  • Experience with NiFi, Schema Registry, and NiFi Registry
  • Strong SQL skills, with the ability to write queries of intermediate complexity
  • Strong understanding of relational and dimensional modeling
  • Experience with Git version-control software
  • Experience with REST APIs and web services
  • Good business analysis and requirements gathering/writing skills

Big Data Engineer | Other

  • Any offer of employment will be contingent on the successful completion of a full background check and drug screen
  • Atlanta-based candidates only
    • Position is currently hybrid (2 days onsite, 3 days remote; flexible schedule)

About Our Client

Our client is one of the nation’s premier transportation companies, operating approximately 19,500 route miles in 22 states and the District of Columbia. It serves every major container port in the eastern United States and provides efficient connections to other rail carriers. Our client operates the most extensive intermodal network in the East and is a major transporter of coal, automotive, and industrial products.

Apply Now