Arman Zharmagambetov
2024
To the Globe (TTG): Towards Language-Driven Guaranteed Travel Planning
Da Ju | Song Jiang | Andrew Cohen | Aaron Foss | Sasha Mitts | Arman Zharmagambetov | Brandon Amos | Xian Li | Justine T Kao | Maryam Fazel-Zarandi | Yuandong Tian
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Travel planning is a challenging and time-consuming task that aims to find an itinerary satisfying multiple, interdependent constraints regarding flights, accommodations, attractions, and other travel arrangements. In this paper, we propose To the Globe (TTG), a real-time demo system that takes natural language requests from users, translates them into symbolic form via a fine-tuned Large Language Model, and produces optimal travel itineraries with Mixed Integer Linear Programming (MILP) solvers. The overall system takes ~5 seconds to reply to a user request with a guaranteed itinerary. To train TTG, we develop a synthetic data pipeline that generates user requests and flight and hotel information in symbolic form without human annotations, based on the statistics of real-world datasets, and fine-tune an LLM to translate NL user requests into their symbolic form, which is sent to the symbolic solver to compute optimal itineraries. Our NL-to-symbolic translation achieves ~91% exact match on a back-translation metric (i.e., whether the symbolic form estimated from the generated natural language matches the ground truth), and its returned itineraries achieve a cost ratio of 0.979 relative to the optimal cost for the ground-truth user request. When evaluated by users, TTG achieves consistently high Net Promoter Scores (NPS) of 35-40% on the generated itineraries.
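The symbolic half of this pipeline can be made concrete with a small optimization model. Below is a minimal sketch, assuming the PuLP modeling library, of how an itinerary choice can be posed as a MILP: pick exactly one flight and one hotel to minimize total trip cost under a budget. All names and prices are hypothetical, and this is not TTG's actual formulation, whose constraints over flights, accommodations, and attractions are richer.

```python
# Minimal MILP sketch (hypothetical data, not TTG's formulation):
# choose one flight and one hotel, minimizing total cost under a budget.
import pulp

flights = {"F1": 320, "F2": 410, "F3": 275}   # hypothetical flight prices
hotels = {"H1": 120, "H2": 95, "H3": 180}     # hypothetical nightly rates
nights, budget = 3, 800

prob = pulp.LpProblem("itinerary", pulp.LpMinimize)
f = pulp.LpVariable.dicts("flight", flights, cat="Binary")
h = pulp.LpVariable.dicts("hotel", hotels, cat="Binary")

cost = (pulp.lpSum(flights[i] * f[i] for i in flights)
        + nights * pulp.lpSum(hotels[j] * h[j] for j in hotels))
prob += cost                                  # objective: total trip cost
prob += pulp.lpSum(f.values()) == 1           # exactly one flight
prob += pulp.lpSum(h.values()) == 1           # exactly one hotel
prob += cost <= budget                        # user's budget constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([v.name for v in prob.variables() if v.value() == 1],
      pulp.value(prob.objective))
```

Because the model is a MILP, the solver returns a provably optimal selection (here, the cheapest feasible flight-hotel pair), which is what gives the system its "guaranteed itinerary" property.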
2021
Softmax Tree: An Accurate, Fast Classifier When the Number of Classes Is Large
Arman Zharmagambetov | Magzhan Gabidolla | Miguel A. Carreira-Perpinan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Classification problems having thousands or more classes naturally occur in NLP, for example in language modeling or document classification. A softmax or one-vs-all classifier naturally handles many classes, but it is very slow at inference time, because every class score must be calculated to find the top class. We propose the “softmax tree”, consisting of a binary tree having sparse hyperplanes at the decision nodes (which make hard, not soft, decisions) and small softmax classifiers at the leaves. This is much faster at inference because the input instance follows a single path to a leaf (whose length is logarithmic in the number of leaves) and the softmax classifier at each leaf operates on a small subset of the classes. Although learning accurate tree-based models has proven difficult in the past, we overcome this by using a variation of a recent algorithm, tree alternating optimization (TAO). Compared to a softmax and other classifiers, the resulting softmax trees are both more accurate in prediction and faster in inference, as shown on NLP problems having from one thousand to one hundred thousand classes.
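As a concrete reading of the inference procedure, here is a minimal NumPy sketch; the Node/Leaf classes and all parameters are illustrative rather than the paper's implementation, and the hyperplanes are stored dense for brevity instead of sparse. An input follows hard hyperplane decisions down a single root-to-leaf path, and only the small leaf softmax is evaluated.

```python
# Illustrative softmax-tree inference (not the paper's code): hard
# hyperplane routing to a leaf, then a small softmax over that leaf's
# class subset.
import numpy as np

class Node:
    def __init__(self, w, b, left, right):
        self.w, self.b = w, b             # hyperplane (dense here for brevity)
        self.left, self.right = left, right

class Leaf:
    def __init__(self, W, b, classes):
        self.W, self.b = W, b             # small softmax over this leaf's classes
        self.classes = classes

def predict(node, x):
    # Follow a single root-to-leaf path: O(depth) hyperplane tests,
    # instead of scoring all K classes as a flat softmax would.
    while isinstance(node, Node):
        node = node.left if node.w @ x + node.b <= 0 else node.right
    scores = node.W @ x + node.b
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return node.classes[int(np.argmax(p))]

# Toy tree over 4 classes with 3-dimensional inputs.
rng = np.random.default_rng(0)
leaf_a = Leaf(rng.normal(size=(2, 3)), np.zeros(2), np.array([0, 1]))
leaf_b = Leaf(rng.normal(size=(2, 3)), np.zeros(2), np.array([2, 3]))
root = Node(rng.normal(size=3), 0.0, leaf_a, leaf_b)
print(predict(root, rng.normal(size=3)))
```

With a balanced tree of depth d over K classes, inference costs O(d) dot products plus one small softmax, versus O(K) scores for a flat softmax, which is the source of the speedup the abstract describes.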
Co-authors
- Da Ju 1
- Song Jiang 1
- Andrew Cohen 1
- Aaron Foss 1
- Sasha Mitts 1