""" To implement this filter, we begin by ranking URL domains according to the volume of texts they contribute to the FineWeb (Penedo et al., 2024a) and FineWeb-2 (Penedo et al., 2025) corpus, as an approximation of web-level English and multilingual data. From this ranking, we select the top one million English domains and the top one million non-English domains. Due to domain overlap and the fact that some sites are now offline, the total number of accessible robots.txt files is smaller than two million. For each domain that remains reachable, we retrieve its robots.txt file as of January 2025 and examine the directives relevant to AI training. In particular, we focus on those targeting the AI-specific user agents listed in Appendix A. Any contents blocked by the current robots.txt is removed retroactively from the entire 2013-2024 range of the training dataset. We follow an opt-out policy, that is, if the corresponding robots.txt files are not available, we consider the data usable for training. The filtering process results in an estimated token loss of approximately 8% in English data and 4% in multilingual data. """
Why not check historical versions of the robots.txt (e.g. archive.org) and limit the retroactive cutoff to a certain date range, parsing each robots.txt accordingly? That might increase the corpus size while staying within legal and fair-use boundaries.
we went a step further because back then (2013 is our oldest training data) LLMs did not exist, so website owners opting out of AI crawlers today might like the option to also remove their past content.
arguments can be made either way but we tried to remain on the cautious side at this point.
we also wrote a paper on how this additional removal affects downstream performance of the LLM https://arxiv.org/abs/2504.06219 (it does so surprisingly little)
Key features
Fully open model: open weights + open data + full training details including all data and training recipes
Massively Multilingual: 1811 natively supported languages
Compliant: Apertus is trained while respecting opt-out consent of data owners (even retrospectively), and avoiding memorization of training data
the full collection of models is here: https://huggingface.co/collections/swiss-ai/apertus-llm-68b6...
PS: you can run this locally on your Mac with these two commands:
pip install mlx-lm
mlx_lm.generate --model mlx-community/Apertus-8B-Instruct-2509-8bit --prompt "who are you?"
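If you'd rather call it from Python than the CLI, mlx_lm also exposes a load/generate API; a minimal sketch following the mlx-lm README pattern (same quantized model as in the command above, chat template applied since it's an instruct model):

from mlx_lm import load, generate

# Load the 8-bit MLX quantization of the instruct model
model, tokenizer = load("mlx-community/Apertus-8B-Instruct-2509-8bit")

messages = [{"role": "user", "content": "who are you?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, verbose=True)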
In the US, many state governments have anti-indemnification laws that restrict state government agencies (including state universities) from agreeing to contracts with such language. I'd love to make this available to researchers at my university, but I'm not sure I can click through such an agreement (similar problems exist with other LLMs).
It is Apache 2 and I don't see anything that prohibits another contracting party from agreeing to the Apertus LLM Acceptable Use Policy and redistributing with just Apache 2 and without the AUP. Maybe this provides a solution? Unless I'm missing something?
> Once a production environment has been set up, we estimate that the model can be realistically trained in approximately 90 days on 4096 GPUs, accounting for overheads. If we assume 560 W power usage per Grace-Hopper module in this period, below the set power limit of 660 W, we can estimate 5 GWh power usage for the compute of the pretraining run.
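For reference, the quoted figure follows directly from those numbers: 4096 GPUs × 560 W × 90 days × 24 h/day ≈ 4.95 × 10^9 Wh, i.e. roughly 5 GWh.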
"Apertus is a 70B and 8B parameter language model designed to push the boundaries of fully-open multilingual and transparent models. The model supports over 1000 languages and long context, it uses only fully compliant and open training data, and achieves comparable performance to models trained behind closed doors."
"pretrained on 15T tokens with a staged curriculum of web, code and math data"
"open weights + open data + full training details including all data and training recipes"
"Apertus is trained while respecting opt-out consent of data owners (even retrospectivey), and avoiding memorization of training data"
> "open weights + open data + full training details including all data and training recipes"
Is it reproducible?
> respecting opt-out consent of data owners (even retrospectively)
Were they notified and given an option to opt out? Owners and authors are not the same. Data owners aren't copyright owners either.
> avoiding memorization of training data
Not convincing.
Presumably, an organization training and then distributing this model cannot be stopped via a copyright or breach-of-contract lawsuit. It may be that folks will figure out copyright-free text-to-image, text-to-video, etc. models as well.
It seems that there is plenty of copyright-free data available to train a useful model. Therefore, when content creators who are upset about AI companies training on their content are asked about this model, there is nothing for them to do but shrug.
The cat is out of the bag.
- model sizes that the industry was at 2-3 gens ago (llama 3.1 era)
- conspicuous lack of benchmark results in announcements
- not on openrouter, no ggufs as yet
quantizations: available now in MLX https://github.com/ml-explore/mlx-lm (gguf coming soon, not trivial due to new architecture)
model sizes: many good dense models today still fall in the range between our chosen small and large sizes
Note that we have a specific focus on multilinguality (over 1000 languages supported), not only on English
It’s easy to become jaded with so many huge models being released, but the reality is they are still from a relatively small group of countries.
For example India has no indigenous models this big despite having a world class talent pool.
Capital though ;)
[I am a grad student here in reinforcement learning]
Anyways, among all the VC/made-at-home driven snake oil, I'd say you should look at sarvam.ai; they are the most focused and no-nonsense group. They have a few good from-scratch models (I believe up to 7B or 14B), as well as a few llama finetunes. Their API is pretty good.
The main thing folks here are attempting is to get LLMs good at local Indian languages (and I don't mean Hindi). I don't think people see value in creating an "indigenous llama" that doesn't have that property. For this, the main bottleneck is data (relatively speaking, there is zero data in those languages on the internet), so there's a team, AI4Bharat, whose main job is curating datasets good enough to get stuff like _translation_ and other NLP benchmarks working well. LLMs too, for which they work with sarvam frequently.
Are there any plans for further models after this one?
I can't imagine that this actually complies with the law.
>> "The "o" stands for "open", "openness", "open source" and is placed where a "TM" symbol (indicating patents, trademarks, protection) would normally reside. Instead openness is the apertus° trademark."
It's also a completely different kind of thing so trademark probably wouldn't come into it even if they had one.
> Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting robots.txt exclusions and filtering for copyrighted, non-permissive, toxic, and personally identifiable content.