Average salary: Rs 1,150,000 per year
- ...Engineer. Experience: 4 to 6 years. Location: Chennai. Must have: Databricks, Spark, Snowflake Schema, Azure. Role and responsibilities: Develop,... ...and infrastructure, with a strong focus on Databricks and the Apache Spark framework. Demonstrate proficiency in Python and other...
- ...mechanisms to ensure data integrity and system resilience within Spark jobs. Optimize PySpark jobs for performance, including partitioning... ...practices in a big data environment. Proficiency in PySpark, Apache Spark, and related big data technologies for data processing,... (Permanent employment, Full time)
- ...Job description: Experience: minimum 5 years. Skills: Java, Spring Boot, Apache Camel, Kafka, JUnit, Spring Security (ADFS), basic Kubernetes. Expectations: should be able to work independently and handle client-facing roles. Technical skill requirements: Core...
- ...responsible for the installation, configuration, maintenance, and performance of critical enterprise systems including Linux servers, Apache Server, and Oracle WebLogic. The ideal candidate will have strong scripting abilities and experience writing SQL queries to...
- ...Responsibilities: Development of applications in Java, Spring Core, and Apache Camel; batch execution; development of REST web services and REST APIs. Development of stored procedures, PL/SQL, and views. Database design and optimization of SQL queries; best...
- ...Chain Analytics. Strong problem-solving and quantitative skills. Experience working with large datasets and distributed computing tools (e.g., Spark, Hadoop) is a plus. Familiarity with data visualization tools (e.g., Tableau, Power BI, matplotlib, seaborn). (ref:hirist.tech) (Hybrid work, Immediate start, Shift work)
- ...and resolve issues related to data processing and storage. Key skills & competencies: Strong knowledge of big data frameworks (Hadoop, Spark, Flink, Kafka). Hands-on experience with cloud platforms (AWS, Azure, GCP). Proficiency in SQL and NoSQL databases (MongoDB, Cassandra,...
- ...infrastructure. You are a successful candidate if you have: Strong experience with data engineering tools and frameworks (e.g., Python, SQL, Spark, Airflow). Hands-on experience with cloud data platforms (AWS, Azure, or GCP). Familiarity with federated data architectures and...
- ...production rollout and infrastructure configuration. Demonstrable experience of successfully delivering big data projects using Kafka and Spark. Exposure to NoSQL databases such as Cassandra, HBase, DynamoDB, and Elasticsearch. Experience working with PCI data... (Long term contract)
- ...tasker with an eye for detail. MS Office skills: should be proficient in MS PowerPoint, MS Excel, and MS Word. About Avendus Spark: Avendus Spark Institutional Equities is India's leading domestic institutional brokerage house. Trusted by 400+ institutional...
- ...Snowflake for data ingestion and processing. Understand and apply PySpark best practices and performance tuning techniques. Experience with Spark architecture and its components (e.g., Spark Core, Spark SQL, DataFrames). ETL & data warehousing: Apply strong understanding of ETL...
- ...required. 5+ years of industry experience with hands-on experience in Apache technologies. Expertise in Python with knowledge of the Django... ...Science or a related technical discipline. Familiarity with software engineering skills including Java, Spark, Python, Scala, etc. (Full time)
- ..., data scientists, and business stakeholders to gather requirements and deliver high-quality solutions. Architect and optimize Apache Spark workloads for performance, cost, and reliability. Ensure best practices in data modeling, governance, security, and compliance...
- ...manipulation functions such as regular expressions (regex). Prior exposure to data analysis in tabular form, for example with Pandas or Apache Spark/Databricks. Knowledge of basic statistics relevant to data science, e.g. precision, recall, F-score. Knowledge of visualization...
- ...experience with RDBMS and NoSQL databases; excellent communication and presentation skills. Preferred skills: Experience with BDD, Apache Spark, C#, Jira, GitLab, Confluence, Docker, and Kubernetes; understanding of CI/CD processes with GitLab. Experience: Minimum of 3 to 6...
- ..., EKS. Proven experience in Java. Working experience with AWS Athena and Glue, PySpark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Apache Spark, Databricks on AWS, and Snowflake on AWS. Proficient in AWS Redshift, S3, Glue, Athena, DynamoDB. AWS Certification: AWS Certified Solutions Architect... (Permanent employment, Full time)
- ...implementation. Incumbents usually require expert knowledge of Databricks, Spark, SQL, etc. Job responsibilities: Design and propose end-to-end... ...years of in-depth experience developing data pipelines within an Apache Spark environment (preferably Databricks). 2+ years of active...
- ...solutions that are forward-thinking in data integration. Programming experience in Scala or Python, and SQL. Working experience in Apache Spark is highly preferred. Familiarity with some AWS and Azure services like S3, ADLS Gen2, Redshift, AWS Glue, Azure Data...
- ...with building and maintaining a cloud system. Familiarity with databases like DB2 and Teradata. Strong working knowledge of Apache Spark, Apache Kafka, Hadoop, and MapReduce. Strong troubleshooting skills and ability to design for scalability and flexibility. Expertise...
- ...projects, and data lake initiatives using technologies like AWS, Kafka, Spark, SQL, and Python. Experience with EDW modeling and architecture... ..., or data architecture roles. Proficiency in SQL, Python, Apache Spark, and Kafka. Strong hands-on experience with AWS data...
- ...(e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language... (Work at office, Shift work, Weekend work)
- ...At least 2 years implementing solutions using AWS services such as Lambda, AWS Athena and Glue, AWS S3, Redshift, Kinesis, and Apache Spark. Experience working with data warehousing, data lakes, or Lakehouse concepts on AWS. Experience implementing batch processing using...
- ...orchestration workflows using industry-standard orchestration tools (e.g., Apache Airflow, Prefect). Apply a strong understanding of major data... ..., Docker, Visual Studio, PyCharm, Redis, Kubernetes, Fivetran, Spark, Dataflow, Dataproc, EMR. Experience with database monitoring...
- ...Strong SQL skills with the ability to write complex queries. Big data: Experience with Spark and Hive, including optimization techniques. Data orchestration: Expertise in Apache Airflow or equivalent tools. Data lake development: Hands-on experience in creating...
- ...solutions (Snowflake, Redshift, Databricks, Fabric, AWS Glue, Azure Data Factory, Synapse, Matillion, dbt). Proficient in the SQL and Apache Spark/Python programming languages. Good-to-have skills include data visualization using Power BI, Tableau, or Looker, and familiarity... (January, Work at office)
- ...integration. Proficiency in data pipeline orchestration tools (e.g., Apache NiFi, Apache Airflow). Strong knowledge of databases (SQL and... ...Familiarity with data processing frameworks (e.g., Hadoop, Spark) and cloud-based data services (e.g., AWS, Azure, GCP). (Saturday)
- ...problems. Develop high-performance and low-latency components to run Spark clusters. Interpret functional requirements into design... ...Experience in big data technologies like HDFS, Hive, HBase, Apache Spark, and Kafka. Experience in building a self-service, platform-agnostic...
- ...TensorFlow, PyTorch, Scikit-learn, Keras, Hugging Face Transformers, and Spark MLlib. Databricks ecosystem: Proficient in Databricks Notebooks,... ...and Lakehouse AI. Data engineering & big data: Experience with Apache Spark, Hadoop, Kafka, Airflow, dbt, and building scalable ETL...
- ...with business needs. Key result areas and activities: Design and implement Lakehouse architectures using Databricks, Delta Lake, and Apache Spark. Lead the development of data pipelines, ETL/ELT processes, and data integration strategies. Collaborate with business and... (Full time)
- ...ETL/ELT pipelines for ingesting and transforming large datasets. Develop both batch and streaming data pipelines using tools like Apache Spark, Kafka, Flink, etc. Build and maintain data APIs and microservices to support analytics and reporting needs. Work with structured... (Full time, Work at office)