Amid the AI boom, the pace of technological iteration has reached an unprecedented level. Problems that once seemed intractable now have viable solutions. This article serves as an "NMT 101" guide: while introducing our project, it also walks readers step by step through the process of fine-tuning an existing translation model to support a low-resource language that is not included in mainstream multilingual models.
Background: Dongxiang as a Low-Resource Language
Dongxiang is a minority language spoken in China’s Gansu Province and is classified as vulnerable by the UNESCO Atlas of the World’s Languages in Danger. Despite being widely spoken in local communities, Dongxiang lacks the institutional and digital support enjoyed by high-resource languages. Before diving into the training pipeline, it helps to briefly understand the language itself. Dongxiang, as its name suggests, is the mother tongue of the Dongxiang people. Descended from Central Asian groups who migrated to Gansu during the Yuan dynasty, the Dongxiang community has linguistic roots closely tied to Middle Mongol. From a writing-system perspective, Dongxiang has undergone a relatively recent standardization. Since the 1990s, with governmental promotion, the language has gradually adopted an official Latin-based orthography, using the 26 letters of the English alphabet and delimiting words with whitespace.

Although it is still categorized under the Mongolic language family, prolonged coexistence with Mandarin-speaking communities throughout history has left the language with a wealth of lexical borrowings from Chinese (Mandarin). Dongxiang exhibits no overt tense inflection or grammatical gender, which may be an advantage that simplifies model training.

Further background on the Dongxiang language and its speakers can be found on our website, which hosts an official English-language introduction released by the Chinese government.
Our Model: How to Use the Translation System
We build our translation system on top of NLLB-200-distilled-600M, a multilingual neural machine translation model released by Meta as part of the No Language Left Behind (NLLB) project. We were inspired by the work of David Dale. However, ongoing updates to the Transformers library have made the original approach difficult to apply. In our own trials, rolling back to earlier versions (e.g., transformers ≤ 4.33) often caused conflicts with other dependencies. In light of these constraints, we provide a full list of libraries in our project’s GitHub requirements.txt for your reference.

Our model was fine-tuned on 42,868 Dongxiang–Chinese bilingual sentence pairs. The training corpus combines publicly available materials with internally curated sources provided by local government partners, all of which were processed and cleaned in advance. Training was carried out using Adafactor, a memory-efficient optimizer well suited to large transformer models. With the distilled architecture, the full fine-tuning process can be completed in under 12 hours on a single NVIDIA A100 GPU. All training configurations, hyperparameters, and experimental settings are documented across two training Jupyter notebooks. Rather than relying on a single bidirectional model, we trained two direction-specific models to support Dongxiang→Chinese and Chinese→Dongxiang translation. Since NLLB is already pretrained on Chinese, joint training under data-imbalanced conditions tends to favor the easier or more dominant direction, so performance gains on the low-resource side (Dongxiang) are often limited. That said, NLLB does support bidirectional translation in a single model, and a straightforward approach is to alternate translation directions at the batch level, as sketched below.
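We did not take this route for our released models, but a minimal sketch of batch-level direction alternation, assuming a `tokenizer`, a `model`, and a list `pairs` of (Dongxiang, Chinese) sentence tuples already exist, might look like this:
import random

def make_bidirectional_batch(pairs, tokenizer, step, batch_size=16, max_length=128):
    # Sample a batch of sentence pairs; even steps train Dongxiang -> Chinese,
    # odd steps train Chinese -> Dongxiang, so both directions share one model.
    batch = random.sample(pairs, batch_size)
    dxg = [p[0] for p in batch]
    zho = [p[1] for p in batch]
    if step % 2 == 0:
        src_texts, tgt_texts, src_lang, tgt_lang = dxg, zho, "sce_Latn", "zho_Hans"
    else:
        src_texts, tgt_texts, src_lang, tgt_lang = zho, dxg, "zho_Hans", "sce_Latn"
    tokenizer.src_lang = src_lang
    x = tokenizer(src_texts, return_tensors="pt", padding=True,
                  truncation=True, max_length=max_length)
    tokenizer.src_lang = tgt_lang
    y = tokenizer(tgt_texts, return_tensors="pt", padding=True,
                  truncation=True, max_length=max_length)
    return x, y  # feed to model(**x, labels=y.input_ids) as in Step 4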
Here are the links to our repository and website.
GitHub Repository
GitHub-hosted website
The model is also publicly available on Hugging Face.
Chinese → Dongxiang
Dongxiang → Chinese
Model Training: Step-by-Step Reproducible Pipeline
Before following this pipeline to build the model, we assume that the reader has a basic understanding of Python and fundamental concepts in natural language processing. For readers less familiar with these topics, Andrew Ng’s courses are a highly recommended gateway. Personally, I also began my own journey into this field through his course.
Step 1: Bilingual Dataset Processing
The first stage of model training focuses on constructing a bilingual dataset. While parallel corpora for major languages can often be obtained by leveraging existing web-scraped sources, Dongxiang–Chinese data remains difficult to acquire. To support transparency and reproducibility, and with consent from the relevant data custodians, we have released both the raw corpus and a normalized version in our GitHub repository. The normalized dataset is produced through a straightforward preprocessing pipeline that removes excessive whitespace, standardizes punctuation, and ensures a clean separation between scripts: Dongxiang text is restricted to Latin characters, while Chinese text contains only Chinese characters.
Below is the code used for preprocessing:
import re
import pandas as pd

def split_lines(s: str):
    # Some raw files contain literal "\n" sequences instead of real newlines
    if "\\n" in s and "\n" not in s:
        lines = s.split("\\n")
    else:
        lines = s.splitlines()
    lines = [ln.strip().strip("'").strip() for ln in lines if ln.strip()]
    return lines

def clean_dxg(s: str) -> str:
    # Keep Latin letters, whitespace, and basic punctuation only
    s = re.sub(r"[^A-Za-z\s,.?]", " ", s)
    s = re.sub(r"\s+", " ", s).strip()
    s = re.sub(r"[,.?]+$", "", s)
    return s

def clean_zh(s: str) -> str:
    # Keep Chinese characters and sentence punctuation only
    s = re.sub(r"[^\u4e00-\u9fff,。?]", "", s)
    s = re.sub(r"[,。?]+$", "", s)
    return s

def make_pairs(raw: str) -> pd.DataFrame:
    # Raw lines alternate: a Dongxiang sentence followed by its Chinese translation
    lines = split_lines(raw)
    pairs = []
    for i in range(0, len(lines) - 1, 2):
        dxg = clean_dxg(lines[i])
        zh = clean_zh(lines[i + 1])
        if dxg or zh:
            pairs.append({"Dongxiang": dxg, "Chinese": zh})
    return pd.DataFrame(pairs, columns=["Dongxiang", "Chinese"])
In practice, bilingual sentence-level pairs are preferred over word-level entries, and excessively long sentences are split into shorter segments (a simple sketch of such splitting follows below). This facilitates more reliable cross-lingual alignment and leads to more stable and efficient model training. Isolated dictionary entries should not be inserted into the training inputs: without surrounding context, the model cannot infer syntactic roles or learn how words interact with surrounding tokens.
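As an illustration only (not part of our released pipeline, and reusing `re` from the preprocessing snippet above), a long pair could be split at sentence-final punctuation and kept only when both sides yield the same number of segments:
def split_long_pair(dxg: str, zh: str, max_words: int = 40):
    # Split an overly long pair at sentence-final punctuation on both sides;
    # keep the split only if the two sides produce the same number of segments.
    if len(dxg.split()) <= max_words:
        return [(dxg, zh)]
    dxg_parts = [p.strip() for p in re.split(r"(?<=[.?])\s+", dxg) if p.strip()]
    zh_parts = [p.strip() for p in re.split(r"(?<=[。?])", zh) if p.strip()]
    if len(dxg_parts) == len(zh_parts) and len(dxg_parts) > 1:
        return list(zip(dxg_parts, zh_parts))
    return [(dxg, zh)]  # fall back to the original pair if alignment is unclear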

When parallel data is limited, a common alternative is to generate synthetic source sentences from monolingual target-language data and pair them with the originals to form pseudo-parallel corpora. This idea was popularized by Rico Sennrich, whose work on back-translation laid the groundwork for many NMT pipelines. LLM-generated synthetic data is another viable approach; prior work has shown it to be effective for building translation systems for Purépecha, an Indigenous language spoken in Mexico.
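As a hedged sketch of Sennrich-style back-translation (not something we ran at scale), monolingual Chinese sentences can be translated into synthetic Dongxiang with the Chinese → Dongxiang model, and each (synthetic Dongxiang, real Chinese) pair is then added to the Dongxiang → Chinese training data. Here `translate_zh_to_dxg` stands in for a generation helper such as the `translate3` function shown in Step 5:
def back_translate(mono_zh_sentences, translate_zh_to_dxg, batch_size=32):
    # Build pseudo-parallel pairs: noisy synthetic source, clean real target.
    pseudo_pairs = []
    for i in range(0, len(mono_zh_sentences), batch_size):
        batch = mono_zh_sentences[i:i + batch_size]
        synthetic_dxg = translate_zh_to_dxg(batch)      # model output (noisy)
        pseudo_pairs.extend(zip(synthetic_dxg, batch))  # (source, target)
    return pseudo_pairs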
Step 2: Tokenizer Preparation
Before text can be digested by a neural machine translation model, it must be converted into tokens. Tokens are discrete units, typically at the subword level, that serve as the basic input symbols for neural networks. Using whole words as atomic units is impractical, since it leads to excessively large vocabularies and rapid growth in model dimensionality. Moreover, word-level representations struggle to generalize to unseen or rare words, whereas subword tokenization allows models to compose representations for novel word forms.
The official NLLB documentation already provides standard examples demonstrating how tokenization is handled. Owing to NLLB’s strong multilingual capacity, most widely used writing systems can be tokenized in a reasonable and stable manner. In our case, adopting the default NLLB multilingual tokenizer (Unigram-based) was sufficient to process Dongxiang text, as the quick check below illustrates.
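As a quick sanity check (a sketch; the exact subword splits depend on the tokenizer version), one can feed a Dongxiang sentence from the corpus through the stock NLLB tokenizer and inspect the pieces it produces:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
sample = "..."  # replace with a Latin-script Dongxiang sentence from the corpus
print(tokenizer.tokenize(sample))   # subword pieces
print(tokenizer(sample).input_ids)  # corresponding token IDs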

Whether the tokenizer should be retrained is best determined by two criteria. The first is coverage: frequent occurrences of unknown tokens (<unk>) indicate that the existing vocabulary cannot represent the target language’s script. The second is segmentation quality: if ordinary words are routinely shattered into a large number of subword pieces, the tokenizer is a poor fit for the language.
Overall, NLLB demonstrates robust behavior even on previously unseen languages. Consequently, tokenizer retraining is generally unnecessary unless the target language uses a highly unconventional writing system or even lacks Unicode support. Retraining a SentencePiece tokenizer also has implications for the embedding layer: new tokens start without pretrained embeddings and must be initialized using random values or simple averaging. The sketch below shows one way to check both criteria on a sample of the corpus.
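A rough check, assuming `dongxiang_sentences` is a list of corpus lines (this is a sketch, not part of our released tooling):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

def tokenizer_fit(sentences):
    # Report the <unk> rate and the average number of subword pieces per
    # whitespace-delimited word; high values suggest retraining may be needed.
    total_tokens, unk_tokens, total_words = 0, 0, 0
    unk_id = tokenizer.unk_token_id
    for sent in sentences:
        ids = tokenizer(sent, add_special_tokens=False).input_ids
        total_tokens += len(ids)
        unk_tokens += sum(1 for i in ids if i == unk_id)
        total_words += len(sent.split())
    print(f"<unk> rate: {unk_tokens / max(total_tokens, 1):.4%}")
    print(f"pieces per word: {total_tokens / max(total_words, 1):.2f}")

tokenizer_fit(dongxiang_sentences)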
Step 3: Language ID Registration
In practical machine translation systems such as Google Translate, the source and target languages must be explicitly specified. NLLB adopts the same assumption. Translation is governed by explicit language tags, referred to as src_lang and tgt_lang, which determine how text is encoded and generated within the model. When a language falls outside NLLB’s predefined scope, it must first be explicitly registered, together with a corresponding expansion of the model’s embedding layer. The embedding layer maps discrete tokens into continuous vector representations, allowing the neural network to process and learn linguistic patterns in numerical form.
In our implementation, a custom language tag is added to the tokenizer as an additional special token, which assigns it a unique token ID. The model’s token embedding matrix is then resized to accommodate the expanded vocabulary. The embedding vector associated with the new language tag is initialized from a zero-centered normal distribution with a small variance, scaled by 0.02. If the newly introduced language is closely related to an existing supported language, its embedding can often be trained on top of the existing representation space. However, linguistic similarity alone does not guarantee effective transfer learning: differences in writing systems can affect tokenization. A well-known example is Moldovan, which is linguistically identical to Romanian and is written in the Latin script, whereas it is written in Cyrillic in the so-called Pridnestrovian Moldavian Republic. Despite the close linguistic relationship, the difference in script introduces distinct tokenization patterns.
The code used to register a new language is provided here.
def fix_tokenizer(tokenizer, new_lang: str):
    # Register the new language code as an additional special token
    old = list(tokenizer.additional_special_tokens)
    if new_lang not in old:
        tokenizer.add_special_tokens(
            {"additional_special_tokens": old + [new_lang]})
    return tokenizer.convert_tokens_to_ids(new_lang)

fix_tokenizer(tokenizer, "sce_Latn")
# we register Dongxiang as sce_Latn, and it should be appended at the end
# output: 256204
print(tokenizer.convert_ids_to_tokens([256100, 256204]))
print(tokenizer.convert_tokens_to_ids(['lao_Laoo', 'sce_Latn']))
# output
# ['lao_Laoo', 'sce_Latn']
# [256100, 256204]

model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
model.resize_token_embeddings(len(tokenizer))
new_id = fix_tokenizer(tokenizer, "sce_Latn")
embed_dim = model.model.shared.weight.size(1)
model.model.shared.weight.data[new_id] = torch.randn(embed_dim) * 0.02
Step 4: Model Training
We fine-tuned the translation model using the Adafactor optimizer, a memory-efficient optimization algorithm designed for large-scale sequence-to-sequence models. The training schedule begins with 500 warmup steps, during which the learning rate is gradually increased up to 1e-4 to stabilize early optimization and avoid sudden gradient spikes. The model is then trained for a total of 8,000 optimization steps, with 64 sentence pairs per optimization step (batch). The maximum sequence length is set to 128 tokens, and gradient clipping is applied with a threshold of 1.0.
We initially planned to adopt early stopping. However, because of the limited size of the bilingual corpus, nearly all available bilingual data was used for training, leaving only a dozen-plus sentence pairs reserved for testing. Under these circumstances, a validation set of sufficient size was not available. Therefore, although our GitHub codebase includes placeholders for early stopping, this mechanism was not actively used in practice.
Below is a snapshot of the key hyperparameters used in training.
optimizer = Adafactor(
    [p for p in model.parameters() if p.requires_grad],
    scale_parameter=False,
    relative_step=False,
    lr=1e-4,
    clip_threshold=1.0,
    weight_decay=1e-3,
)
batch_size = 64
max_length = 128
training_steps = 8000
warmup_steps = 500
It is also worth noting that, in the design of the loss function, we adopt a computationally efficient training strategy. The model receives tokenized source sentences as input and generates the target sequence incrementally. At each step, the predicted token is compared against the corresponding reference token in the target sentence, and the training objective is computed using token-level cross-entropy loss.
loss = model(**x, labels=y.input_ids).loss
# Pseudocode below illustrates the underlying mechanism of the loss function
for batch in batches:
    x = tokenize(source_sentences)        # input: source-language tokens
    y = tokenize(target_sentences)        # target: reference translation tokens
    predictions = model.forward(x)        # predict next-token distributions
    loss = cross_entropy(predictions, y)  # compare with the reference tokens
    backpropagate(loss)                   # loss.backward()
    update_model_parameters()             # optimizer.step(); optimizer.zero_grad()
This formulation actually carries an implicit assumption: the reference translation represents the only correct answer, and the model’s output must align with it token by token. Under this assumption, any deviation from the reference is treated as an error, even when a prediction conveys the same idea using different wording, synonyms, or an altered sentence structure.
The mismatch between token-level supervision and meaning-level correctness is particularly problematic in low-resource and morphologically flexible languages. At the training stage, this issue can be alleviated by relaxing strict token-level alignment and treating multiple paraphrased target sentences as equally valid references. At the inference stage, instead of selecting the highest-probability output, a set of candidate translations can be generated and re-ranked using semantically informed criteria (e.g., chrF), as sketched below.
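One concrete way to realize chrF-based re-ranking is minimum-Bayes-risk-style selection over sampled candidates. The sketch below is illustrative rather than part of our released pipeline; it assumes sacrebleu is installed and reuses NLLB-style `model` and `tokenizer` objects:
import sacrebleu

def rerank_with_chrf(model, tokenizer, text, tgt_lang="zho_Hans",
                     num_candidates=8, max_new_tokens=128):
    # Sample several candidate translations, then keep the candidate with the
    # highest average chrF agreement with the other candidates.
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        do_sample=True,
        top_p=0.9,
        num_return_sequences=num_candidates,
        max_new_tokens=max_new_tokens,
    )
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    chrf = sacrebleu.CHRF()

    def agreement(cand):
        others = [c for c in candidates if c is not cand]
        if not others:
            return 0.0
        return sum(chrf.sentence_score(cand, [o]).score for o in others) / len(others)

    return max(candidates, key=agreement)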
Step 5: Model Evaluation
Once the model is built, the next step is to examine how well it translates. Translation quality is shaped not only by the model itself but also by how the translation process is configured at inference time. Under the NLLB framework, the target language must be explicitly specified during generation. This is achieved through the forced_bos_token_id parameter, which anchors the output to the intended language. Output length is controlled through two parameters: a minimum output allowance (a), which guarantees a baseline number of tokens the model is allowed to generate, and a scaling factor (b), which determines how the maximum output length grows in proportion to the input length. The maximum number of generated tokens is thus a linear function of the input length, computed as a + b × input_length. In addition, max_input_length limits how many input tokens the model reads.
This function powers the Chinese → Dongxiang translation; the opposite direction is handled the same way with src_lang and tgt_lang swapped and the corresponding model loaded.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
MODEL_DIR3 = "/content/drive/MyDrive/my_nllb_CD_model"
tokenizer3 = AutoTokenizer.from_pretrained(MODEL_DIR3)
model3 = AutoModelForSeq2SeqLM.from_pretrained(MODEL_DIR3).to(device)
model3.eval()

def translate3(text, src_lang="zho_Hans", tgt_lang="sce_Latn",
               a=16, b=1.5, max_input_length=1024, **kwargs):
    tokenizer3.src_lang = src_lang
    inputs = tokenizer3(text, return_tensors="pt", padding=True,
                        truncation=True, max_length=max_input_length).to(model3.device)
    result = model3.generate(
        **inputs,
        forced_bos_token_id=tokenizer3.convert_tokens_to_ids(tgt_lang),
        max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
        **kwargs
    )
    outputs = tokenizer3.batch_decode(result, skip_special_tokens=True)
    return outputs
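A typical call looks like the following (the Chinese input here is only an illustrative stand-in; the actual output depends on the released checkpoint):
print(translate3("你好，欢迎来到东乡。"))
# returns a list with one Dongxiang sentence, e.g. ['...']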
Model quality is then assessed using a combination of automatic evaluation metrics and human judgment. On the quantitative side, we report standard machine translation metrics such as BLEU and chrF++. BLEU scores were computed using standard BLEU-4, which measures word-level n-gram overlap from unigrams to four-grams and combines them using a geometric mean with a brevity penalty. chrF++ was calculated over character-level n-grams (supplemented with word n-grams) and reported as an F-score; a sketch of how such scores can be computed follows the results below. It should be noted that the current evaluation is preliminary: because of limited data availability at this early stage, BLEU and chrF++ scores were computed on only a few dozen held-out sentence pairs. Our model achieved the following results:
Dongxiang → Chinese (DX→ZH)
BLEU-4: 44.00
chrF++: 34.3
Chinese → Dongxiang (ZH→DX)
BLEU-4: 46.23
chrF++: 59.80
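For reference, the sketch below shows how such scores can be computed with sacrebleu, assuming `hypotheses` and `references` are parallel lists of decoded outputs and reference translations for the Dongxiang → Chinese direction (for the Chinese → Dongxiang direction, drop tokenize="zh"):
import sacrebleu

# BLEU-4 with the "zh" tokenizer so n-grams are counted over Chinese characters
bleu = sacrebleu.corpus_bleu(hypotheses, [references], tokenize="zh")
# chrF++ = chrF with word n-grams of order 2 added
chrf = sacrebleu.corpus_chrf(hypotheses, [references], word_order=2)
print(f"BLEU-4: {bleu.score:.2f}")
print(f"chrF++: {chrf.score:.2f}")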
BLEU-4 scores above 40 are generally regarded as strong in low-resource settings, indicating that the model captures sentence structure and key lexical choices with reasonable accuracy. The lower chrF++ score in the Dongxiang → Chinese direction is expected and does not necessarily indicate poor translation quality: Chinese allows substantial surface-level variation in word choice and sentence structure, which reduces character-level overlap with a single reference translation.
In parallel, bilingual evaluators fluent in both languages reported that the model performs reliably on simple sentences, such as those following basic subject–verb–object constructions. Performance degrades on longer and more complex sentences. While these results are encouraging, they also indicate that further improvement is still required.
Step 6: Deployment
At the current stage, we deploy the project through a lightweight setup: the documentation and demo interface are hosted on GitHub Pages, while the trained models are released on Hugging Face. This approach enables public access and community engagement without incurring additional infrastructure costs. Details regarding GitHub-based deployment and Hugging Face model hosting follow the official documentation provided by GitHub Pages and the Hugging Face Hub, respectively.
This script uploads a locally trained, Hugging Face–compatible model.
import os
from huggingface_hub import HfApi, HfFolder

# Load the Hugging Face access token
token = os.environ.get("HF_TOKEN")
HfFolder.save_token(token)

# Path to the local directory containing the trained model artifacts
local_dir = "/path/to/your/local_model_directory"

# Target Hugging Face Hub repository ID in the format: username/repo_name
repo_id = "your_username/your_model_name"

# Upload the entire model directory to the Hugging Face Model Hub
api = HfApi()
api.upload_folder(
    folder_path=local_dir,
    repo_id=repo_id,
    repo_type="model",
)
Following model release, a Gradio-based interface is deployed as a Hugging Face Space and embedded into the project’s GitHub Pages site. Compared with Docker-based self-deployment, using Hugging Face Spaces with Gradio avoids the cost of maintaining dedicated cloud infrastructure. A minimal sketch of such an app is shown below.
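The app.py below is a minimal sketch of the kind of Gradio demo that can run in a Hugging Face Space; the repository name and interface labels are illustrative placeholders, not our actual configuration.
import gradio as gr
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_ID = "your_username/your_model_name"  # hypothetical ZH -> DX checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

def translate(text):
    # Translate a Chinese sentence into Dongxiang using the fine-tuned model
    tokenizer.src_lang = "zho_Hans"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            forced_bos_token_id=tokenizer.convert_tokens_to_ids("sce_Latn"),
            max_new_tokens=256,
        )
    return tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

demo = gr.Interface(fn=translate,
                    inputs=gr.Textbox(label="Chinese"),
                    outputs=gr.Textbox(label="Dongxiang"),
                    title="Chinese → Dongxiang Translator")
demo.launch()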

Reflection
Throughout the project, data preparation, not model training, dominated the overall workload. The time spent cleaning, validating, and aligning Dongxiang–Chinese data far exceeded the time required to fine-tune the model itself. Without local government involvement and the support of native and bilingual speakers, completing this work would not have been possible. From a technical perspective, this imbalance highlights a broader issue of representation in multilingual NLP: low-resource languages such as Dongxiang are underrepresented not because of inherent linguistic complexity, but because the data required to support them is expensive to obtain and relies heavily on human expertise.
At its core, this project digitizes a printed bilingual dictionary and constructs a basic translation system. For a community of fewer than a million people, these incremental steps play an outsized role in ensuring that the language is not excluded from modern language technologies. Finally, let’s take a moment to appreciate the breathtaking scenery of Dongxiang Autonomous County!

Contact
This article was jointly written by Kaixuan Chen and Bo Ma, who were classmates in the Department of Statistics at the University of North Carolina at Chapel Hill. Kaixuan Chen is currently pursuing a master’s degree at Northwestern University, while Bo Ma is pursuing a master’s degree at the University of California, San Diego. Both authors are open to professional opportunities.
If you are interested in our work or would like to connect, feel free to reach out:
Project GitHub: https://github.com/dongxiangtranslationproject
Kaixuan Chen: [email protected]
Bo Ma: [email protected]