BigData Engineer/Developer
Experian
Contract Costa Mesa, California, United States Posted 3 years ago
About Position
BigData Engineer/Developer (Contract)
$70.00 / Hourly
Costa Mesa, California, United States
Skills
· Hands-on programming/coding skills in one language, preferably C/C++, Java, Scala, or Python
· Strong hands-on experience in Big Data – Hadoop/Spark
· Design, develop, and implement the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Attunity, Schema Registry, and other streaming-oriented technology (a sketch follows this list)
· Assist in building out the DevOps strategy for hosting and managing our SDP microservice and connector infrastructure in the AWS cloud
· Knowledge of Hadoop/Spark and various data formats like Parquet, CSV, etc.
· Strong track record of designing/implementing big data technologies around Apache Hadoop, Kafka streaming, NoSQL, Java/J2EE, and distributed computing platforms in large enterprises where scale and complexity have been tackled
· Proven experience participating in agile development projects for enterprise-level systems component design and implementation
· Deep understanding and application of enterprise software design for implementing data services and middleware
· Quick learner who works with minimal guidance
· Proactive; takes initiative
· Curious – learning new technologies and solving problems is critical
· Critical thinking and analytical skills
· Communication skills and a positive mental attitude
· Number of years of experience is flexible if the candidate has the above-mentioned proven hands-on tech skills
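To make the KStreams item above concrete, here is a minimal Kafka Streams sketch in Scala (the posting's preferred Spark language, using Kafka's standard Java API). The application id, broker address, and topic names are illustrative placeholders, not details from the posting:

```scala
import java.util.Properties

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.kstream.KStream
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}

object FilterStream extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sdp-filter")        // hypothetical app id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker address
  props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
  props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)

  val builder = new StreamsBuilder()
  // Read raw events, drop empty records, and forward the rest to a clean topic.
  val events: KStream[String, String] = builder.stream[String, String]("raw-events")
  events
    .filter((_, value) => value != null && value.nonEmpty)
    .to("clean-events")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```

The topology only filters empty records; in a real SDP pipeline the serdes and transformations would match the actual event schema.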
Description
BigData: Hadoop / Spark (preferably with Scala; PySpark is acceptable). Looking for engineers with hands-on experience.
Knowledge of and experience with Kafka streaming and containerized microservices
Knowledge of and experience with RDBMS (Aurora MySQL) and NoSQL (Cassandra)
Experience building data pipelines and data lakes
Experience in API development
Experience with or knowledge of data formats like Parquet, CSV, etc. (see the Spark sketch after this list)
Data-storage architectures like HDFS, HBase, S3 and/or Hive.
AWS Certified Architect and/or Developer
AWS Cloud experience – S3, EFS, MSK, ECS, EMR, etc.
Data transformation concepts, including partitioning and shuffling
Data processing constructs like joins and MapReduce
Exposure to working with cloud infrastructure like AWS and Azure
Experienced Java Programmer.
Great attention to detail
Organizational skills
An analytical mind
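Since the description emphasizes CSV/Parquet formats, partitioned storage, and S3, a minimal Spark (Scala) sketch of the kind of pipeline step implied above might look like the following. The bucket, paths, and the `event_date` partition column are assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession

object CsvToParquet extends App {
  val spark = SparkSession.builder()
    .appName("csv-to-parquet")
    .getOrCreate()

  // Placeholder locations; in practice these would be real S3/HDFS paths.
  val input  = "s3a://example-bucket/raw/events.csv"  // assumed input path
  val output = "s3a://example-bucket/curated/events"  // assumed output path

  val df = spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(input)

  // Partition the Parquet output by a date column so downstream reads can prune scans.
  df.write
    .mode("overwrite")
    .partitionBy("event_date")  // assumed column name
    .parquet(output)

  spark.stop()
}
```

Writing Parquet partitioned by date is a common data-lake layout choice: it keeps files columnar and compressed while letting queries skip entire date directories.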
Responsibilities
· 5+ years of experience in relevant streaming/queueing implementation roles
· Bachelor's degree in a technical discipline; Master's preferred
· Experience monitoring the health of a Kafka cluster (data loss and data lag) and a strategy for short TTD (time to detect) of broker failures and fast TTR (time to recover)
· Strong coder who can implement Kafka producers and consumers in various programming languages, following common patterns and best practices (see the producer sketch after this list)
· Experience with various Kafka integrations, such as Elasticsearch and databases (RDBMS or NoSQL)
· Experience in Spark stream processing is a plus
· AWS Certified Architect and/or Developer
· Experience in RDBMS change-log streaming is a plus
· Systems integration experience, including design and development of APIs, adapters, and connectors, and integration with Hadoop/HDFS, real-time systems, data warehouses, and analytics solutions
· Financial industry experience preferred
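For the producer/consumer item above, a minimal idempotent Kafka producer sketch in Scala (using the standard Java client) might look like this; the broker address, topic, key, and payload are placeholders:

```scala
import java.util.Properties

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord, RecordMetadata}
import org.apache.kafka.common.serialization.StringSerializer

object EventProducer extends App {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed broker address
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  // Durability settings commonly used in production pipelines: wait for all
  // in-sync replicas and deduplicate retries on the broker side.
  props.put(ProducerConfig.ACKS_CONFIG, "all")
  props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")

  val producer = new KafkaProducer[String, String](props)
  try {
    // Send a record keyed by an entity id; the callback surfaces broker errors.
    val record = new ProducerRecord[String, String]("events", "entity-42", """{"status":"ok"}""")
    producer.send(record, (metadata: RecordMetadata, exception: Exception) =>
      if (exception != null) exception.printStackTrace()
      else println(s"wrote to ${metadata.topic()}-${metadata.partition()}@${metadata.offset()}")
    ).get()
  } finally producer.close()
}
```

Keying records by entity id keeps all events for the same entity in one partition, which preserves per-entity ordering for downstream consumers.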