At Azurro, we consistently place importance on using Open Source technologies, both in our projects and in our everyday work. We have decided to share another language model trained by us: APT2-1B-Base. We are confident that smaller language models have great potential, and that giving everyone interested direct access to them contributes to the development of this rapidly changing field.


  1. We use one consumer graphics card

We know that training large language models requires a lot of computing power and is usually the preserve of the major players on the market. However, we want to show that individuals and small companies can achieve comparable goals at relatively low cost.

  2. We train the model only on a Polish corpus

Models trained on multiple languages can struggle with tasks in Polish. Our goal was to create a model focused on our native language.

  3. We use manually selected, high-quality texts for training the model

The quality of the data used for training has a significant impact on the quality of the model. Therefore, we decided to select the texts manually and to avoid data sources with low-quality texts.

Why did we make these choices?

Training a language model is an expensive task. It requires significantly more computational power than simply using the model, often several times more. For example, a model that runs on a graphics card with 6 GB of VRAM needs at least 24 GB of VRAM for training.
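As a rough illustration of that ratio, the sketch below estimates training memory for an AdamW setup by counting four full copies of the parameters: weights, gradients, and the two optimizer moments. This is only a back-of-envelope lower bound under those assumptions; activations and framework overhead come on top of it, and it is not Azurro's actual accounting.

```python
def training_memory_gb(n_params: int, bytes_per_param: int = 4) -> float:
    """Rough lower bound on training memory with AdamW.

    Counts weights + gradients + two Adam moments = 4 copies of the
    parameters. Activations and framework overhead are NOT included.
    """
    copies = 4
    return copies * n_params * bytes_per_param / 1024**3

# ~954M parameters in fp32 already need roughly 14 GB before activations
print(f"{training_memory_gb(954_000_000):.1f} GB")  # → 14.2 GB
```

This makes it clear why a 24 GB consumer card is around the practical minimum for training a model of this size.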

Modern consumer computers are equipped with powerful graphics cards that can be used for training a model at one's own home. This is why we decided to use a top consumer graphics card: an Nvidia RTX 4090 with 24 GB of VRAM.

Existing language models are mainly trained on English data, with only a small admixture of other languages, including Polish. This often leads to difficulties in handling our native language. Even the popular GPT-3.5 model from OpenAI often struggles with proper use of Polish, and open models such as Llama, Falcon, or Mistral perform even worse in this respect. This is why we focused on creating a model based solely on the Polish language, which allowed us to achieve significantly better quality in this area.

It is worth emphasizing that the quality of the models corresponds to the quality of the data used for training. Therefore, considering the limited model size, we made sure to carefully select the texts. We avoided using sources that may contain a lot of low-quality data. Our team prepared its own set of texts that were carefully processed and used during the training of the model.

Model – technical information

APT2-1B-Base is a base model that introduces the new APT2 (Azurro Pretrained Transformer) series. It has been trained using ALLaMo, an original open-source framework that allows users to train language models similar to Meta AI's LLaMA models quickly and efficiently.

APT2-1B-Base is an autoregressive language model based on the transformer architecture. It has been trained on data collected before April 2023.

30 billion tokens were used for training, while the training dataset (the Polish corpus) contains over 7 billion tokens.
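These two numbers can be sanity-checked with a little arithmetic, using the batch configuration given in the training hyperparameters (micro batch size × gradient accumulation steps × sequence length = 4 × 256 × 512 tokens per optimizer step):

```python
tokens_seen = 30_000_000_000     # total training tokens
corpus_size = 7_000_000_000      # unique tokens in the Polish corpus
tokens_per_step = 4 * 256 * 512  # micro batch × grad accumulation × sequence length

epochs = tokens_seen / corpus_size       # passes over the corpus
steps = tokens_seen // tokens_per_step   # optimizer steps

print(round(epochs, 1))  # → 4.3
print(steps)             # → 57220
```

So the model saw the corpus roughly four times over about 57,000 optimizer steps.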

A dedicated tokenizer was prepared and trained specifically for this model.
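For illustration, a BPE tokenizer with the same vocabulary size can be trained with the Hugging Face tokenizers library. The corpus and the special tokens below are placeholders only; the actual APT2 tokenizer uses 7 special tokens and a corpus that is not published here.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Placeholder corpus: the real tokenizer was trained on the full Polish dataset.
corpus = ["Najważniejszym celem człowieka na ziemi jest życie w harmonii z naturą."] * 100

tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

# vocab_size matches the model card; the special tokens here are illustrative.
trainer = trainers.BpeTrainer(vocab_size=8000, special_tokens=["<unk>", "<s>", "</s>"])
tokenizer.train_from_iterator(corpus, trainer=trainer)

print(tokenizer.encode("celem człowieka").tokens)
```

On a corpus this small the learned vocabulary stays well below 8,000 entries; the trainer simply stops when no more merges are possible.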


Model description:

  • developed by: Azurro
  • language: Polish
  • model type: causal decoder-only
  • license: CC BY-NC 4.0 (non-commercial use)
  • available at: Hugging Face

Model details:

  • model parameters: 954M
  • sequence length: 512
  • vocabulary size: 8000
  • layers: 73
  • heads: 16
  • d_head: 64
  • d_model: 1024
  • dropout: 0.0
  • no bias
  • positional encoding: RoPE
  • activation function: SwiGLU
  • normalizing function: RMSNorm
  • intermediate size: 2816
  • norm epsilon: 1e-06
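As a sanity check, the 954M figure can be reproduced from the dimensions above, and the listed building blocks (SwiGLU activation, RMSNorm, no bias) can be sketched in PyTorch. This is an illustrative sketch under the assumption of a LLaMA-style layout with untied input and output embeddings; it is not the actual APT2 implementation, and RoPE attention is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """RMS normalization (no mean centering), as in LLaMA-style models."""
    def __init__(self, d_model: int = 1024, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d_model))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight

class SwiGLU(nn.Module):
    """Gated feed-forward block: down(silu(gate(x)) * up(x)), no bias anywhere."""
    def __init__(self, d_model: int = 1024, d_ff: int = 2816):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))

# Parameter count from the listed dimensions (untied embeddings assumed):
d_model, n_layers, vocab_size, d_ff = 1024, 73, 8000, 2816
attn = 4 * d_model * d_model           # Q, K, V and output projections, no bias
mlp = 3 * d_model * d_ff               # gate, up and down projections of SwiGLU
norms = 2 * d_model                    # two RMSNorm weight vectors per layer
embeddings = 2 * vocab_size * d_model  # input embeddings + untied LM head
total = n_layers * (attn + mlp + norms) + embeddings + d_model  # + final RMSNorm
print(f"{total / 1e6:.0f}M")  # → 954M

y = SwiGLU()(RMSNorm()(torch.randn(2, 512, 1024)))
print(tuple(y.shape))  # → (2, 512, 1024)
```

The per-layer cost is dominated by the SwiGLU feed-forward (three projections), which is why the intermediate size of 2816 rather than the usual 4 × d_model keeps the total close to 1B with 73 layers.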


Training hyperparameters:

  • micro batch size: 4
  • gradient accumulation steps: 256
  • batch size: 524,288 tokens
  • learning rate: 5e-04
  • optimizer: AdamW
  • (β1, β2) = (0.9, 0.95)
  • adam_eps = 1e-8
  • weight decay: 0.1
  • grad clip: 1.0

Tokenizer details:

  • type: BPE
  • special tokens: 7
  • alphabet size: 112
  • vocabulary size: 8000


Collecting a large amount of high-quality training data is a great challenge. Over the past years at Azurro, we have carried out many projects involving Big Data processing. With this extensive experience, we were able to prepare a carefully selected training dataset quickly and efficiently.

Our training dataset contains:

  • e-books: 1,600 million tokens
  • Polish Wikipedia: 970 million tokens
  • web crawl data: 4,600 million tokens
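The components add up to the corpus size quoted earlier:

```python
ebooks, wikipedia, web_crawl = 1_600, 970, 4_600  # million tokens, as listed above
total_millions = ebooks + wikipedia + web_crawl
print(total_millions)  # → 7170, i.e. about 7.2 billion tokens
```

This is consistent with the "over 7 billion tokens" figure given for the training dataset.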


This model can be easily loaded using the AutoModelForCausalLM functionality.

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Azurro/APT2-1B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

To reduce memory usage, you can load the model in lower precision (bfloat16).

import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

And then you can use Hugging Face Pipelines to generate text.

import transformers

text = "Najważniejszym celem człowieka na ziemi jest"

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(text, max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Generated output:

 "Najważniejszym celem człowieka na ziemi jest życie w harmonii z naturą. Człowiek powinien dążyć do tego, aby jego ciało i umysł były zdrowe i sprawne. W życiu należy kierować się zasadami etycznymi. W średniowieczu bardzo popularny był pogląd mówiący o tym, że człowiek jest istotą grzeszną. Poglądy te znalazły swój wyraz w literaturze. W utworach tych możemy odnaleźć motywy cierpienia, miłości, śmierci, życia pozagrobowego."

(In English: "The most important goal of man on earth is to live in harmony with nature. Man should strive to keep his body and mind healthy and fit. In life, one should be guided by ethical principles. In the Middle Ages, the view that man is a sinful being was very popular. These views found their expression in literature. In such works we can find motifs of suffering, love, death, and the afterlife.")

Limitations and Biases

APT2-1B-Base is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

APT2-1B-Base can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT2-1B-Base was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.


Because of the unclear legal situation, we have decided to publish the model under the CC BY-NC 4.0 license, which allows non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met.


The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.


Please cite this model using the following format:


@misc{APT2-1B-Base,
    author    = {Krzysztof Ociepa, Azurro},
    title     = {APT2-1B-Base: polski otwarty model językowy},
    year      = {2023},
    url       = {},
    note      = {Accessed: 2023-10-04}, % change this date
    urldate   = {2023-10-04} % change this date
}