You will be responsible for designing and optimizing end-to-end solutions. You must be a technical guru when it comes to selecting the right tools for implementation and integration. Security, performance, scalability, availability, accessibility, and maintainability are your top priorities when designing solutions. You must have deep, broad, and hands-on experience across the relevant technologies.
- 5-7 years’ experience in data warehousing and big data projects.
- Deep and broad experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, HBase, Impala, Kudu, Solr, etc.
- Hands-on experience with multiple NoSQL databases such as Cassandra, MongoDB, Neo4j, Elasticsearch, and the ELK Stack
- Experience with stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Experience with real-time messaging platforms like Kafka, Kinesis, etc.
- Advanced SQL knowledge and query-authoring experience with relational databases, plus working familiarity with a variety of systems, including distributed relational databases such as SingleStore and Vitess
- Experience in building and optimizing big data pipelines
- Experience with object stores like MinIO and Ceph
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management
- Proven record of building highly available and always-on data platforms
- Linux shell scripting
- Programming languages: Python, Java, Scala, etc.
- Fluent in English & Arabic