Ihar Nestsiarenia's home page
Full name: Ihar Nestsiarenia
Current role: Machine Learning Engineer / Head of Machine Learning, EPAM (LT/LV)
LinkedIn: https://www.linkedin.com/in/nesterione/
Experienced Machine Learning Engineer with over 10 years of leading ML projects from concept to production. Specializing in Natural Language Processing, document processing, and search technologies. Proficient in the full spectrum of NLP, from statistical models to deep learning and Large Language Models. Skilled at rapid prototyping and building scalable solutions from scratch to validate business ideas.
Roles have included Staff Engineer, Machine Learning Engineer, and AI Architect, with a current focus on technical implementation and system design. I excel in agile, compact teams, efficiently validating ideas and delivering production-ready solutions.
Open to consulting and mentoring opportunities, passionate about sharing knowledge and helping others grow in the AI field.
Domains: NLP, LLMs, Information Search, LegalTech, Information Extraction, OCR, Document Processing, Prototyping, Rapid Development, MLOps, LLMOps
Technologies: Python, Java, TensorFlow, PyTorch, Elasticsearch, Solr, LangChain, LangGraph, Argo, Langfuse, SageMaker, AWS, GCP, Azure, OpenAI API, GPT-4, Vector Databases (Qdrant, Chroma), MLOps Practices, CI/CD Pipelines, Git, DVC, Kubernetes, Docker
Achievements:
Technologies: Kubernetes, vLLM, GCP, Qdrant, Chroma, LangGraph, LangChain, Argo, Langfuse
Achievements:
Technologies: Elasticsearch, Chroma, Azure, Qdrant, LangGraph, LangChain, Argo, Langfuse
Summary:
Achievements:
Technologies: AWS SageMaker, AWS Glue, AWS Textract, GPT-4, Python, OpenAI API
Achievements:
Technologies: Azure, Kubernetes, Keda, Elasticsearch, DEPS, PyTorch, ChatGPT, GitHub Copilot, AWS Textract, PostgreSQL
Achievements:
Technologies: TensorFlow, Keras, scikit-learn, spaCy, BM25, Word2Vec, Solr, DVC, SageMaker, Python, Docker
Achievements:
Technologies: TensorFlow, Keras, scikit-learn, BigARTM, NLTK, spaCy, fastText, BM25, Word2Vec, Gensim, Matplotlib, Jupyter Notebook, Node.js, Java, XSLT, Elasticsearch
Achievements:
Technologies: Elasticsearch, ELK, Kibana, graph databases, Semantic Web, RDF/OWL, SPARQL, Jena, Jena TDB, Apache Fuseki, JBoss Fuse, Tomcat, ActiveMQ, Java 7, Java EE, Spring 3, REST, SOAP, Camel, Blueprint, XSLT, XML, XPath, JUnit, Cucumber, Log4j, Lombok, Maven, Ant, Bamboo CI, Git, JIRA, Sonar, FindBugs, PMD, Checkstyle, WinSCP, Linux, Cron
I was the primary instructor and a contributor to the course “Fundamentals of Intelligent Data Analysis,” where I conducted lectures and exercises. The course covered various aspects of data mining, with a focus on natural language processing. It combined fundamental theory of machine learning with practice-oriented exercises using modern NLP and ML tools.
Central topics of the course included: fundamentals of machine learning; model performance evaluation, metrics, and cross-validation; NLP basics, including data cleaning, preprocessing, lemmatization, and stemming; building a pipeline for a text classification problem; and neural networks for text analysis.
Summary: As a co-founder, I played multiple roles, including product manager, backend developer, architect, and DevOps engineer, in the development of a product for small and mid-sized retail businesses.
The product provided client-base management with a loyalty system, purchase accounting, and employee KPI monitoring. The project underwent several pivots and ultimately resulted in a CRM with personal analytics for small businesses.
Our solution is easy to integrate: business owners can start using it within minutes to track sales, build a client base, and run a loyalty program. It offers detailed information about the client base and transaction history, simplifies communication with clients, and provides advanced analytics on demand. The product reached 3,000 Monthly Active Users (MAU).
Team size: 6
Technologies: Python, Flask, pytest, Java 8, Spring Boot, JUnit, Maven, Kotlin, Swift, React.js, Vue.js, MongoDB, Docker, docker-compose, Linux, Bash, Git, GitLab CI, Loki, Grafana, Trello, Miro, Notion
Summary: The product was designed to solve several problems: recommending related products, which increased sales and client loyalty; semantic search, which enabled search by symptoms and surfaced useful guides; and smart filtering for specific clients, such as pregnant women, lactating women, or children. The product also recommended alternatives based on the official anthology of medicines. The project was partially acquired.
Team size: 8
Responsibilities: As a co-founder, I served as product manager and technical leader: I organized the SDLC process, controlled code quality, configured CI/CD, and automated processes. As a backend developer, I implemented the search system (fuzzy matching, ranking, and search over the anthology graph) and built a rule-based recommendation system. I was also responsible for operations: server configuration, proxies, and deployment management.
Technologies: Java 8, Spring MVC, Spring Security, Spring Data, Spring Boot, JSP, AngularJS (admin panel), JPA/Hibernate, QueryDSL, MySQL, JUnit / Spring Test Framework, Python (data processing and aggregation from multiple sources), Linux, Docker, docker-compose, Git, Gradle, Maven, GitLab CI, Jenkins CI, Bash, Trello / GitLab issue tracker, Fiddler
As a course developer and trainer, I created and delivered a comprehensive Java programming course from scratch. The curriculum consisted of two sections: Basic Java and Java Enterprise Edition (Java EE). I achieved high student retention, and more than half of the students went on to obtain employment in the IT industry.
As a co-founder, I was responsible for technical product management, DevOps, and backend development of an aggregation service that collected advertisements for long-term rentals. The service collected, deduplicated, and normalized advertisements, and provided search with ranking, filtering, and sorting of results. The project was closed because we could not find product-market fit.
The project included several components: crawlers that ran on a schedule; a deduplication system that used fuzzy rules to detect the same advertisement across sources; a RESTful API; a web application built in React.js; and a server-side rendering component based on PhantomJS that produced pre-rendered versions of pages so search crawlers could index the whole app.
Team size: 2
Technologies: Java, Spring boot, React.js, PaaS OpenShift, maven, git, jsoup, mongodb, ODM morphia, docker, docker-compose, Linux, PhantomJS, Web Crawling, Search Engine Ranking, REST APIs, Product Management, DevOps
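The deduplication step described above can be sketched in a few lines. This is a minimal illustration only, assuming simple string similarity over a title-plus-address key; the real system used hand-tuned fuzzy rules, and the field names and 0.85 threshold here are my assumptions:

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Return True when two strings are near-duplicates (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def deduplicate(ads: list[dict]) -> list[dict]:
    """Keep the first occurrence of each near-duplicate advertisement."""
    unique: list[dict] = []
    for ad in ads:
        key = f"{ad['title']} {ad['address']}"  # hypothetical dedup key
        if not any(similar(key, f"{u['title']} {u['address']}") for u in unique):
            unique.append(ad)
    return unique

ads = [
    {"title": "2-room flat, city centre", "address": "Main St 5"},
    {"title": "2 room flat city centre",  "address": "Main St. 5"},  # same listing, other source
    {"title": "Studio near the park",     "address": "Green Ave 12"},
]
print(len(deduplicate(ads)))  # prints 2
```

A production version would also normalize prices and areas before comparison, but the pairwise-similarity idea is the same.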
During my tenure, I contributed to core courses offered by the department.
I am a seasoned trainer and advisor with experience in a variety of disciplines including Java, Python, and Machine Learning. I have also been actively involved in mentoring activities, providing guidance and direction to individuals and teams.
Department: 05.13.05 Elements and devices of computer technology and control systems
Research domain: computer vision, object detection, vehicle tracking, traffic-light management, optimization
Summary: During my research, I focused on the integration of intelligent analytics and monitoring systems for road traffic.
My research centered on extracting information from video streams and developing optimization models to reduce traffic load. In a city where all roads and traffic lights are aware of road conditions in real time, such a system could dynamically adjust traffic-light regimes to normalize traffic flow.
I worked on optimization models that could be deployed on single-board computers, developed a prototype for car tracking using OpenCV and deep neural networks, presented my results at several international conferences, and published several papers on the topic.
Department: Mathematical modeling, numerical methods and program complexes
Diploma score: 10 (from 10)
Summary: As a continuation of my undergraduate studies, I conducted research on the modeling of dynamic transient processes using the Finite Element Method. My research resulted in a modeling application built from several key components.
Department: Computer Engineering and Design
Diploma score: 5 (from 5)
Summary: I acquired knowledge of various approaches for designing web applications and gained proficiency in working with vector and raster graphics, including the basics of design. My final project involved the implementation of a 3D web editor using WebGL (Three.js) and Angular.js.
Department: Information Technology
Diploma score: 10 (from 10)
Summary: During my student years, I gained research experience and participated in multiple conferences. I also defended a thesis related to my final project, which focused on the mathematical modeling of transient processes. The project consisted of two parts: applying the finite element method to the problem of heating a metal plate under load, and developing a software application with a visual editor, an experiment management system, and a custom solver.