Job description

Key Responsibilities:
• Understand the business and functional requirements provided by the architect/internal stakeholders to develop and deploy Spark- and Java-based solutions
• Design and develop applications/application features within timelines, conforming to quality standards and best practices throughout the development life cycle in Java, Spark, and Big Data technologies
• Own the assigned components/product features, develop the solution or feature implementation, and take responsibility for unit testing
• Participate in design and system testing activities
• Manage data at scale using relational databases and SQL
• Build best-in-class ETL-based solutions for effective data ingestion and transfo...
Job Responsibilities: The Analyst will work with lead analysts to deliver analytics by:
a. Building analytics products to deliver automated, scaled insights in a self-serve manner (on the PBI/Tableau platform)
b. Assisting with complex data pulls and data manipulation to develop analytics dashboards or conduct analytics deep dives
c. Scaling current efforts to productize analytics delivery by implementing "out of the box" solutions to deliver insights, and recommending design enhancements on existing products (familiarity with AI visuals in PBI will help)

Requirements & Qualifications:
• 4-8 years of experience in analytics
• Strong logical, analytical, and problem-solving skills
• Good understanding of digital and data analytics ...
You are required to set up a multinode environment consisting of a master node and multiple worker nodes. You are also required to set up a client program that communicates with the nodes based on the type of operation requested by the user. The operations expected for this project are:
WRITE: Given an input file, split it into multiple partitions and store them across multiple worker nodes.
READ: Given a file name, read the different partitions from the different workers and display the reassembled file to the user.
MAP-REDUCE: Given an input file, a mapper file, and a reducer file, execute a MapReduce job on the cluster.
I am working on a YouTube dataset from which I have to derive 5 insights using Spark, Hive, HDFS, and Elasticsearch.
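One such insight could be, for example, the top categories by total views. As a plain-Python sketch of the aggregation that a Spark or Hive query would run at scale (the column layout and sample rows are hypothetical, since the actual dataset schema is not given):

```python
from collections import Counter

# Hypothetical rows: (video_id, category, view_count) tuples.
rows = [
    ("v1", "Music", 1200),
    ("v2", "Gaming", 800),
    ("v3", "Music", 300),
    ("v4", "News", 500),
]

def top_categories_by_views(rows, n=5):
    """Insight sketch: total views per category, highest first."""
    totals = Counter()
    for _video_id, category, views in rows:
        totals[category] += views
    return totals.most_common(n)
```

In Spark this would be a `groupBy("category").sum("views")` followed by a descending sort; the result could then be indexed into Elasticsearch for search and visualization.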