me on this domain
- Same as e-mail
What I can do
- work with Scala code and solutions from the cats/ZIO ecosystem;
- write tests, document code clearly, and read documentation as well;
- use web frameworks and solutions built on them;
- design backends for high load and ETLs for big data;
- choose appropriate tools to solve a problem;
- organize work in a team of colleagues to create a product;
- use VCS effectively, including understanding the git-flow model;
- work with continuous integration systems;
- understand frontend and modern JS (although I don’t like doing this);
- work with databases: RDBMS, NoSQL, and key-value storages;
- build large web portals;
- use the Apache stack of big data solutions, including cloud offerings;
- integrate third-party APIs, including payment systems and banks;
- work with search engines;
- deal with legacy code;
- work with cloud infrastructures;
- design the architecture of distributed applications;
- build actor-based systems.
What I want to do
- develop and implement effective solutions to business problems;
- see how my solutions are delivered to production, creating value for customers;
- write effective and clean code;
- work alongside colleagues who are smarter and more experienced than me.
What I do NOT like
- so-called “body shops”;
- inadequate management;
- A pragmatist, passionate about the things I like to do.
- I like to learn new things and apply my knowledge to projects.
- I have a good imagination and memory, and can explain my ideas to others.
Supported a distributed backend built on an actor architecture (Akka and Akka Persistence). Created a microservice for closed parking lots.
Integrated several payment systems (a local one and Stripe).
- Java, Scala
- Akka, Play, Tapir
- GCE, k8s
Creating the architecture for a data lake, optimizing ETL processes.
- http4s, Play, ZIO, cats, KillBill
- Google Dataproc, Google Composer
2020 — fintech
Writing and supporting integrations with banks for KillBill and internal payment gateway.
- Scala, Java
- http4s, sttp, Play, ZIO, cats, Akka, a bit of Lagom
- KillBill, Google Compute Engine
2017-2020 — data engineering
Building the internal user analytics processing architecture; developing tools, libraries (twilight, sparkle, cogwheel, glimmer) and documentation; running A/B tests.
Writing and maintaining tracking tools (Clerk); training and coaching other teams.
Supporting the internal Apache Spark fork; deploying and partially supporting a Hadoop cluster. Unifying all ETL processes on a single platform (Airflow) and tweaking it to the company’s needs.
- Scala, Java, a bit of Python/Rust
- http4s, cats, Spark API, sbt
- Vagga, Lithos, Apache Spark, YARN, HDFS, Kafka, Airflow, Jupyter Notebook
2014-2017 — backend
Mentorship in EVO Summer Python Lab.
- Python, JS
- aiohttp, Pylons (fork), GraphQL, rx.js
- Docker, PostgreSQL, MySQL, Kafka, ElasticSearch, Redis, webpack
Designing and developing an order tracking system, optimizing the backend. Implementing CI/CD to deploy and test the project, refactoring the monolith.
Designing an architecture for collecting user metrics and implementing it in Scala for better performance.
- Scala, Python
- Play, Flask, React.js
- Mercurial, Docker, Redis (Cluster), AWS
Integrating the helpdesk with Yandex internal services.
- Python, JS
- Git, REST, Ansible
2011-2012: National Aviation University
Supporting lib.nau.edu.ua, administering the library’s network segment, servers and related services. Optimizing the backend and classifier of the library catalog, speeding up book searches.