
COVID-19 notice: Digikala is continuously monitoring recent developments in coronavirus news to fulfill our corporate social responsibility and promote the safety of our teams and of those considering Digikala as a place to pursue their career goals. We have put a flexible interview process in place until further notice.

Technology
DataOps Engineer
Tehran, Iran
Job Description

The DataOps engineer maintains and prepares data processing infrastructure to support large and complex use cases throughout the enterprise. The person in this role creates scalable and reusable solutions for gathering, collecting, storing, processing, and serving data at both large and very large (i.e., Big Data) scales. These solutions can span any of the following domains: ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, data warehousing, stream processing, and machine learning.

Role Responsibilities

  • Assists in the development of large-scale data structures and pipelines to organize, collect, and standardize data that helps generate insights and addresses reporting needs.
  • Applies an understanding of key business drivers to accomplish own work.
  • Writes ETL (Extract/Transform/Load) processes, designs database systems, and develops tools for real-time and offline analytic processing.
  • Integrates data from a variety of sources, ensuring that it adheres to data quality and accessibility standards.
  • Plans and works on internal projects as needed, including legacy system replacement, monitoring and analytics improvements, tool development, and technical documentation.
Requirements

    • Ability to perform deep-dive technical troubleshooting in critical situations.
    • A team player who is disciplined and has great attention to detail.
    • Strong communication skills with both technical and non-technical peers.
    • Ability to juggle multiple tasks at once.
    • Interest in distributed and highly available systems.
    • Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources.
    • Ability to understand complex systems and solve challenging analytical problems.
    • Experience building data transformation and processing solutions.
    • Expertise in Hadoop architecture.
    • Expertise in batch and stream processing systems (e.g., Spark, Flink, Airflow).
    • Good understanding of Linux-based operating systems.
    • Familiarity with data pipeline and data analysis ecosystems.
    • Strong scripting skills, preferably in Python and Java.
    • Familiarity with configuration management and CI/CD pipelines/tools.
    • Familiarity with container concepts and usage (e.g., Docker).
    • Familiarity with distributed systems.
Apply for This Position