Senior Hadoop Engineer w/Machine Learning experience

Location Pleasanton, United States
Posted 23-January-2019

Title: Sr. Hadoop Engineer w/Machine Learning experience
Location: Pleasanton, CA
Position Type: Contract
Duration: 12 Months

LOCAL candidates strongly preferred.

Consultant resources shall possess most of the following technical knowledge and experience:

* Provide technical leadership, develop vision, gather client requirements, and translate them into technical architecture.
* Strong hands-on experience building, deploying, and productionizing ML models using frameworks such as Spark MLlib, TensorFlow, PyTorch, and scikit-learn is mandatory.
* Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize machine learning models is mandatory.
* Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems are a must.
* Design and implement an integrated Big Data platform and analytics solution
* Design and implement data collectors to collect and transport data to the Big Data Platform.
* 4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.
* 4-5 years of programming experience in Java, Scala, and Python.
* Proficient in SQL and relational database design and methods for data retrieval.
* Knowledge of NoSQL systems like HBase or Cassandra
* Hands-on experience in Cloudera Distribution 5.x
* Hands-on experience creating and indexing Solr collections in a SolrCloud environment.
* Hands-on experience building data pipelines using Hadoop components such as Sqoop, Hive, Pig, Solr, MapReduce, Spark, and Spark SQL.
* Must have experience developing HiveQL and UDFs for analyzing semi-structured/structured datasets.
* Must have experience with Spring framework
* Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFiles, and text files.
* Hands-on experience with real-time analytics tools such as Spark, Kafka, and Storm.
* Experience with graph databases such as Neo4j, TigerGraph, and OrientDB.
* Must have working experience with data warehousing and business intelligence systems.
* Expertise in Unix/Linux environments, including writing scripts and scheduling/executing jobs.
* Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue-resolution process.
* Experience with R, Jupyter/Zeppelin

* Strong SQL skills
* Key skills: Java, Spring, Scala, Cloudera Hadoop, Spark MLlib, Spark, HBase, Neo4j, Solr, Python, machine learning.
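As a rough illustration of the ML responsibilities above (feature engineering, fitting a model, and evaluating it), here is a minimal stdlib-only Python sketch. It is not tied to any framework named in this posting; the data, thresholds, and function names are hypothetical placeholders, not part of the role's actual codebase.

```python
import statistics

def standardize(values):
    # Feature engineering: z-score scaling of a numeric feature.
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0
    return [(v - mean) / stdev for v in values]

def threshold_classifier(feature, threshold=0.0):
    # Toy model: predict class 1 when the scaled feature exceeds the threshold.
    return [1 if f > threshold else 0 for f in feature]

def accuracy(predictions, labels):
    # Model evaluation: fraction of correct predictions.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical dataset: one numeric feature and binary labels.
raw = [2.0, 8.0, 1.5, 9.0, 3.0, 7.5]
labels = [0, 1, 0, 1, 0, 1]

scaled = standardize(raw)
preds = threshold_classifier(scaled)
print(f"accuracy = {accuracy(preds, labels):.2f}")
```

In practice the same shape applies with Spark MLlib or scikit-learn: transform features, fit an estimator, and score it against held-out labels.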

Thanks and Regards,
Vijay Chavan
Saicon Consultants, Inc.

- provided by Dice
