Apply for Senior Big Data Engineer (Scala/Spark)

    August 6th, 2020

    Kyiv, Kharkiv, Uzhhorod, Lviv, Dnipro, Remote

    Required skills

    • 5+ years of professional experience
    • Solid experience with Scala and Spark
    • Experience with AWS
    • Great coding skills and software development experience 
    • Level of English: Upper-Intermediate
    • Watched all seasons of “Rick and Morty” 


    Responsibilities

    • Perform tasks related to data migration
    • Work under the close supervision of the Lead Big Data Engineer and help other engineers execute the compute migration

    We offer

    • Competitive compensation according to your technical skills
    • Long-term projects (12+ months) with a great customer
    • 5-day working week, 8-hour working day, flexible schedule
    • Democratic management style & friendly environment
    • Work-from-home (WFH) option
    • Annual paid vacation — 15 business days, plus unpaid vacation
    • Paid sick leave — 6 business days per year
    • Ukrainian official holidays
    • Corporate Perks (external training, English courses, corporate events/team buildings)
    • Cozy office in the center of the city
    • Coffee, cookies and other goodies
    • Professional and personal growth

    Project description

    The client is an American e-book and audiobook subscription service with a catalog of one million titles. The platform hosts 60 million documents on its open publishing platform.
    The platform allows:

    • anyone to share their ideas with the world;
    • access to audiobooks;
    • access to music published by composers from around the world;
    • access to articles from private publishers and world magazines;
    • access to exclusive content.

    The Core Platform team provides robust, foundational software, increasing operational excellence to scale apps and data. We focus on building, testing, and deploying apps and infrastructure that help other teams scale rapidly, interoperate, integrate with real-time data, and incorporate machine learning into their products. Working with our customers on the Data Science and Content Engineering teams, and with our peers on the Internal Tools and Infrastructure teams, we bring systems-level visibility and focus to our projects.

    We will develop and operate standards and infrastructure for RPC, service discovery, and data ingestion.

    The client’s goal is not total architectural or design perfection, but choosing the right trade-offs to strike a balance between speed, quality, and cost.