Job Description
Introducing Match Group AI
Match Group AI contributes to Match Group products such as Tinder and Hinge using Hyperconnect's AI technology. It also innovates the user experience by finding and solving problems that are hard to tackle with existing approaches but tractable with machine learning. To this end, it builds tools that help users express themselves better and develops new features that make the process of finding meaningful connections more satisfying.
Introducing the Match Group AI ML Team
The ML Team actively applies AI technologies such as natural language processing, image analysis, and recommendation systems in mobile and server environments, conducting research and development to give users a better experience.
To this end, we are looking for people who can work with us to solve problems such as:
- Effectively utilizing multi-modal data
- Domain adaptation to overcome differences between data collected from different domains
- Multi-task and multi-label classification modeling
- Text and image summarization and methods for evaluating it
- Diversity and long-tail problems in recommendation systems
- Developing new features with large language models or vision-language models, and training, tuning, and serving the large-scale models behind them
Match Group AI also works steadily on researching AI technologies that can be incorporated into its products.
We are looking for people who can quickly assess the feasibility of a technology through prototyping and, after commercialization, apply methods such as the following to build an AI flywheel that drives continuous improvement and growth of the system:
- Handling highly imbalanced or noisy label data
- Meta-learning methods that cope with constantly changing model requirements or insufficient initial data
- Lightweight models and optimizations that achieve high performance while maintaining low latency in mobile environments
- Modeling, optimization, and distillation methods that train large-scale models and reliably process hundreds or thousands of inputs per second in real service environments
- Continual/lifelong learning methods that keep improving models already in production
Introducing the ML Engineer Role
ML Engineers need both the research skills of a scientist who studies and improves cutting-edge models and the engineering skills to push a model's time/space complexity to the limit and maximize inference performance.
Based on these capabilities, we discover and define problems encountered in actual services, reproduce or develop SotA models to solve them, deploy models to on-device and server environments, and build AI flywheels that continuously monitor and improve those models. Along the way, we collaborate closely with, and get help from, specialized teams such as backend/frontend/DevOps engineers, data analysts, and PMs.
For a more detailed look at how we work, see:
- AI in Social Discovery (Blending Research and Production)
- [How AI Lab Works] Interview with Head of AI - Shurain
Organizing our research results and publishing them as papers or code is also one of the team's goals.
When building a machine learning model for a product, existing research is often insufficient. To fill in the gaps, everyone on the project works together to write up the meaningful parts of the research and, where possible, release it along with the code. The external research results published so far are listed below.
- 2024: CUPID, a real-time session-based mutual recommendation system for a 1:1 social discovery platform (ICDM Workshop presentation)
- 2023: TiDAL, an active learning technique based on model training behavior for more efficient learning (ICCV 2023)
- 2023: A study on setting thresholds that simultaneously satisfy multiple classification criteria in a moderation environment (WSDM 2023)
- 2022: A study on increasing semantic diversity in conversation generation (EMNLP 2022)
- 2022: A method for learning effectively in environments with high label noise (ECCV 2022)
- 2022: A study on chatbots that imitate a target character using only a few of that character's utterances (NAACL 2022)
- 2022: Improving conversation generation models using examples (ACL 2022 Workshop presentation)
- 2022: Distillation techniques for audio classification in mobile environments (ICASSP 2022)
- 2021: A feature normalization study that preserves feature importance for click-through rate prediction (ICDM 2021 Workshop, Best Paper Award)
- 2021: An efficient click-through rate prediction model based on tabular learning (ICLR 2021 Workshop presentation)
- 2021: A study on using large-scale generative models for efficient retrieval-based chatbots (EMNLP 2021)
- 2020: Solving the long-tailed visual recognition problem from the perspective of label distribution shift (CVPR 2021)
- 2020: Text-to-speech (TTS) via few-shot learning (INTERSPEECH 2020)
- 2019: Facial reconstruction via few-shot learning (AAAI 2020)
- 2019: A keyword spotting model that runs fast on mobile (TC-ResNet) (INTERSPEECH 2019)
- 2019: A lightweight image segmentation model optimized for mobile environments (MMNet), released on arXiv
- 2018: 2nd place in the Low-Power Image Recognition Challenge (LPIRC)
For ML research to go well, the deep learning training infrastructure must also be in place.
At Hyperconnect, we have built and operate our own deep learning research cluster so that ML Engineers can develop models and run experiments without constraint. For research and development we can use a cluster of 20 DGX-A100s (160 A100 GPUs in total), along with other on-premise equipment. We also build and operate our own data pipeline, covering data collection and preprocessing, on top of cloud services, and we work with software engineers (backend/frontend/DevOps/MLSE) who help bring ML models into production.
Required Qualifications
- A general understanding of the AI/ML domain, in-depth knowledge of at least one specific area, and at least 3 years of related project experience
- Interest in putting AI technology into real services
- Ability to discover statistical characteristics and patterns in data through Exploratory Data Analysis (EDA) and reflect them in ML models
- Solid Python development skills, including development with open-source frameworks such as TensorFlow, PyTorch, CatBoost, and JAX
- Ability to read papers that do not have a publicly released implementation and implement them quickly and accurately
- Experience improving a model's test performance on publicly available benchmark datasets
- The engineering skills needed to train ML models and deploy them to services
- Ability to communicate fluently in Korean, regardless of degree or nationality
Preferred Qualifications
- Publications at top-tier machine learning conferences or journals (NeurIPS, ICLR, ICML, CVPR, ICCV/ECCV, KDD, etc.) or awards in AI-related competitions
- An overall understanding of the AI/ML domain
- Experience integrating AI technology into real-world services and significantly improving key metrics
- Experience contributing to open-source projects related to machine learning
- Broad development experience beyond AI/ML, including client (Android, iOS) and backend development
- Experience planning A/B tests, defining target KPIs, and performing SQL-based data analysis
- Experience automating machine learning workflows (AutoML, hyperparameter optimization, data and training pipelines, etc.)
- Ability to communicate fluently in English
If any of the information you submit is found to be false, or if you are disqualified from providing labor under relevant laws, your employment may be cancelled. If necessary, additional screening and document verification may be conducted beyond the previously announced hiring procedures.
If you apply for a position with Hyperconnect, this Privacy Policy applies to the processing of your personal information: https://career.hyperconnect.com/privacy
Company Info.
Match Group
Match Group is an American internet and technology company headquartered in Dallas, Texas. It owns and operates the largest global portfolio of popular online dating services, including Tinder, Match.com, Meetic, OkCupid, Hinge, PlentyOfFish, and OurTime, among a total of more than 45 global dating companies. The company was owned by IAC until July 2020, when Match Group was spun off as a separate, publicly traded company.
Industry: Social media company
No. of Employees: 1,880
Location: Dallas, TX, USA