Switch to 2.0 branch (#1152)

* Adapt boards to v2 partition tables

* fix esp log error

* fix display style

* reset emotion after download assets

* fix compiling

* update assets default url

* Add user only tools

* Add image cache

* smaller cache and buffer, more heap

* use MAIN_EVENT_CLOCK_TICK to avoid audio glitches

* bump to 2.0.0

* fix compiling errors

---------

Co-authored-by: Xiaoxia <terrence.huang@tenclass.com>
Commit 83f6f8c703 (parent 3a3dfc003e) by Xiaoxia, committed via GitHub
2025-09-04 15:41:28 +08:00
196 changed files with 3918 additions and 4902 deletions


@@ -0,0 +1,110 @@
# SPIFFS Assets Builder
This script builds the SPIFFS assets partition for ESP32 projects, packaging various asset files into a format the device can use.
## Features
- Processes wake word models (WakeNet Model)
- Bundles text font files
- Processes emoji image collections
- Automatically generates an asset index file
- Packages everything into a final `assets.bin` file
## Requirements
- Python 3.6+
- The relevant asset files
## Usage
### Basic syntax
```bash
./build.py --wakenet_model <wakenet_model_dir> \
           --text_font <text_font_file> \
           --emoji_collection <emoji_collection_dir>
```
### Parameters
| Parameter | Type | Required | Description |
|------|------|------|------|
| `--wakenet_model` | directory path | No | Path to the wake word model directory |
| `--text_font` | file path | No | Path to the text font file |
| `--emoji_collection` | directory path | No | Path to the emoji image collection directory |
### Examples
```bash
# Full example with all parameters
./build.py \
    --wakenet_model ../../managed_components/espressif__esp-sr/model/wakenet_model/wn9_nihaoxiaozhi_tts \
    --text_font ../../components/xiaozhi-fonts/build/font_puhui_common_20_4.bin \
    --emoji_collection ../../components/xiaozhi-fonts/build/emojis_64/
# Font file only
./build.py --text_font ../../components/xiaozhi-fonts/build/font_puhui_common_20_4.bin
# Emoji collection only
./build.py --emoji_collection ../../components/xiaozhi-fonts/build/emojis_64/
```
## Workflow
1. **Create the build directory structure**
   - `build/` - main build directory
   - `build/assets/` - asset files directory
   - `build/output/` - output files directory
2. **Process the wake word model**
   - Copy the model files into the build directory
   - Generate `srmodels.bin` with `pack_model.py`
   - Copy the generated model file into the assets directory
3. **Process the text font**
   - Copy the font file into the assets directory
   - Supports font files in `.bin` format
4. **Process the emoji collection**
   - Scan the given directory for image files
   - Supports `.png` and `.gif` formats
   - Automatically generate the emoji index
5. **Generate configuration files**
   - `index.json` - asset index file
   - `config.json` - build configuration file
6. **Package the final assets**
   - Generate `assets.bin` with `spiffs_assets_gen.py`
   - Copy it to the build root directory
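For reference, the `index.json` produced in step 5 has the following shape. The font and emoji entries here are illustrative; the actual names come from the files you pass in:

```json
{
    "version": 1,
    "srmodels": "srmodels.bin",
    "text_font": "font_puhui_common_20_4.bin",
    "emoji_collection": [
        { "name": "happy", "file": "happy.png" },
        { "name": "sad", "file": "sad.gif" }
    ]
}
```

Keys for the model, font, and emoji list are only present when the corresponding command-line argument was given.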
## Output files
After the build completes, the following are generated under `build/`:
- `assets/` - all asset files
- `assets.bin` - the final SPIFFS assets image
- `config.json` - build configuration
- `output/` - intermediate output files
## Supported asset formats
- **Model files**: `.bin` (processed by `pack_model.py`)
- **Font files**: `.bin`
- **Image files**: `.png`, `.gif`
- **Configuration files**: `.json`
## Error handling
The script includes thorough error handling:
- Checks that source files and directories exist
- Verifies the result of each subprocess call
- Prints detailed error messages and warnings
## Notes
1. Make sure all dependent Python scripts live in the same directory
2. Use absolute paths, or paths relative to the script directory, for asset files
3. The build process cleans up files from previous builds
4. The size of the generated `assets.bin` is limited by the SPIFFS partition size
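As a quick sanity check for note 4, the fit of `assets.bin` against the partition can be verified with a short helper. This is a minimal sketch, assuming the 0x400000 default that mirrors `assets_size` in the generated `config.json`:

```python
import os

def fits_partition(bin_path: str, partition_size: int = 0x400000) -> bool:
    """Return True if the built image fits the SPIFFS assets partition."""
    size = os.path.getsize(bin_path)
    print(f"assets.bin: {size} bytes, partition: {partition_size} bytes")
    return size <= partition_size
```

`spiffs_assets_gen.py` performs the same comparison itself and aborts the build when the image is too large.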

scripts/spiffs_assets/build.py (new executable file, 223 lines)

@@ -0,0 +1,223 @@
#!/usr/bin/env python3
"""
Build the spiffs assets partition
Usage:
./build.py --wakenet_model <wakenet_model_dir> \
--text_font <text_font_file> \
--emoji_collection <emoji_collection_dir>
Example:
./build.py --wakenet_model ../../managed_components/espressif__esp-sr/model/wakenet_model/wn9_nihaoxiaozhi_tts \
--text_font ../../components/xiaozhi-fonts/build/font_puhui_common_20_4.bin \
--emoji_collection ../../components/xiaozhi-fonts/build/emojis_64/
"""
import os
import sys
import shutil
import argparse
import subprocess
import json
from pathlib import Path
def ensure_dir(directory):
"""Ensure directory exists, create if not"""
os.makedirs(directory, exist_ok=True)
def copy_file(src, dst):
"""Copy file"""
if os.path.exists(src):
shutil.copy2(src, dst)
print(f"Copied: {src} -> {dst}")
else:
print(f"Warning: Source file does not exist: {src}")
def copy_directory(src, dst):
"""Copy directory"""
if os.path.exists(src):
shutil.copytree(src, dst, dirs_exist_ok=True)
print(f"Copied directory: {src} -> {dst}")
else:
print(f"Warning: Source directory does not exist: {src}")
def process_wakenet_model(wakenet_model_dir, build_dir, assets_dir):
"""Process wakenet_model parameter"""
if not wakenet_model_dir:
return None
# Copy input directory to build directory
wakenet_build_dir = os.path.join(build_dir, "wakenet_model")
if os.path.exists(wakenet_build_dir):
shutil.rmtree(wakenet_build_dir)
copy_directory(wakenet_model_dir, os.path.join(wakenet_build_dir, os.path.basename(wakenet_model_dir)))
# Use pack_model.py to generate srmodels.bin
srmodels_output = os.path.join(wakenet_build_dir, "srmodels.bin")
try:
subprocess.run([
sys.executable, "pack_model.py",
"-m", wakenet_build_dir,
"-o", "srmodels.bin"
], check=True, cwd=os.path.dirname(__file__))
print(f"Generated: {srmodels_output}")
# Copy srmodels.bin to assets directory
copy_file(srmodels_output, os.path.join(assets_dir, "srmodels.bin"))
return "srmodels.bin"
except subprocess.CalledProcessError as e:
print(f"Error: Failed to generate srmodels.bin: {e}")
return None
def process_text_font(text_font_file, assets_dir):
"""Process text_font parameter"""
if not text_font_file:
return None
# Copy input file to build/assets directory
font_filename = os.path.basename(text_font_file)
font_dst = os.path.join(assets_dir, font_filename)
copy_file(text_font_file, font_dst)
return font_filename
def process_emoji_collection(emoji_collection_dir, assets_dir):
"""Process emoji_collection parameter"""
if not emoji_collection_dir:
return []
emoji_list = []
# Copy each image from input directory to build/assets directory
for root, dirs, files in os.walk(emoji_collection_dir):
for file in files:
if file.lower().endswith(('.png', '.gif')):
# Copy file
src_file = os.path.join(root, file)
dst_file = os.path.join(assets_dir, file)
copy_file(src_file, dst_file)
# Get filename without extension
filename_without_ext = os.path.splitext(file)[0]
# Add to emoji list
emoji_list.append({
"name": filename_without_ext,
"file": file
})
return emoji_list
def generate_index_json(assets_dir, srmodels, text_font, emoji_collection):
"""Generate index.json file"""
index_data = {
"version": 1
}
if srmodels:
index_data["srmodels"] = srmodels
if text_font:
index_data["text_font"] = text_font
if emoji_collection:
index_data["emoji_collection"] = emoji_collection
# Write index.json
index_path = os.path.join(assets_dir, "index.json")
with open(index_path, 'w', encoding='utf-8') as f:
json.dump(index_data, f, indent=4, ensure_ascii=False)
print(f"Generated: {index_path}")
def generate_config_json(build_dir, assets_dir):
"""Generate config.json file"""
# Get absolute path of current working directory
workspace_dir = os.path.abspath(os.path.join(os.path.dirname(__file__)))
config_data = {
"include_path": os.path.join(workspace_dir, "build/include"),
"assets_path": os.path.join(workspace_dir, "build/assets"),
"image_file": os.path.join(workspace_dir, "build/output/assets.bin"),
"lvgl_ver": "9.3.0",
"assets_size": "0x400000",
"support_format": ".png, .gif, .jpg, .bin, .json",
"name_length": "32",
"split_height": "0",
"support_qoi": False,
"support_spng": False,
"support_sjpg": False,
"support_sqoi": False,
"support_raw": False,
"support_raw_dither": False,
"support_raw_bgr": False
}
# Write config.json
config_path = os.path.join(build_dir, "config.json")
with open(config_path, 'w', encoding='utf-8') as f:
json.dump(config_data, f, indent=4, ensure_ascii=False)
print(f"Generated: {config_path}")
return config_path
def main():
parser = argparse.ArgumentParser(description='Build the spiffs assets partition')
parser.add_argument('--wakenet_model', help='Path to wakenet model directory')
parser.add_argument('--text_font', help='Path to text font file')
parser.add_argument('--emoji_collection', help='Path to emoji collection directory')
args = parser.parse_args()
# Get script directory
script_dir = os.path.dirname(os.path.abspath(__file__))
# Set directory paths
build_dir = os.path.join(script_dir, "build")
assets_dir = os.path.join(build_dir, "assets")
if os.path.exists(assets_dir):
shutil.rmtree(assets_dir)
# Ensure directories exist
ensure_dir(build_dir)
ensure_dir(assets_dir)
print("Starting to build SPIFFS assets partition...")
# Process each parameter
srmodels = process_wakenet_model(args.wakenet_model, build_dir, assets_dir)
text_font = process_text_font(args.text_font, assets_dir)
emoji_collection = process_emoji_collection(args.emoji_collection, assets_dir)
# Generate index.json
generate_index_json(assets_dir, srmodels, text_font, emoji_collection)
# Generate config.json
config_path = generate_config_json(build_dir, assets_dir)
# Use spiffs_assets_gen.py to package final build/assets.bin
try:
subprocess.run([
sys.executable, "spiffs_assets_gen.py",
"--config", config_path
], check=True, cwd=script_dir)
print("Successfully packaged assets.bin")
except subprocess.CalledProcessError as e:
print(f"Error: Failed to package assets.bin: {e}")
sys.exit(1)
# Copy build/output/assets.bin to build/assets.bin
shutil.copy(os.path.join(build_dir, "output", "assets.bin"), os.path.join(build_dir, "assets.bin"))
print("Build completed!")
if __name__ == "__main__":
main()


@@ -0,0 +1,148 @@
#!/usr/bin/env python3
"""
Build multiple spiffs assets partitions with different parameter combinations
This script calls build.py with different combinations of:
- wakenet_models
- text_fonts
- emoji_collections
And generates assets.bin files with names like:
wn9_nihaoxiaozhi_tts-font_puhui_common_20_4-emojis_32.bin
"""
import os
import sys
import shutil
import subprocess
import argparse
from pathlib import Path
def ensure_dir(directory):
"""Ensure directory exists, create if not"""
os.makedirs(directory, exist_ok=True)
def get_file_path(base_dir, filename):
    """Get full path for a file, handling 'none' case"""
    if filename == "none":
        return None
    # Font names get a .bin suffix; emoji collections are directory names
    return os.path.join(base_dir, filename if filename.startswith("emojis_") else f"{filename}.bin")
def build_assets(wakenet_model, text_font, emoji_collection, build_dir, final_dir):
"""Build assets.bin using build.py with given parameters"""
# Prepare arguments for build.py
cmd = [sys.executable, "build.py"]
if wakenet_model != "none":
wakenet_path = os.path.join("../../managed_components/espressif__esp-sr/model/wakenet_model", wakenet_model)
cmd.extend(["--wakenet_model", wakenet_path])
if text_font != "none":
text_font_path = os.path.join("../../components/xiaozhi-fonts/build", f"{text_font}.bin")
cmd.extend(["--text_font", text_font_path])
if emoji_collection != "none":
emoji_path = os.path.join("../../components/xiaozhi-fonts/build", emoji_collection)
cmd.extend(["--emoji_collection", emoji_path])
    print(f"\nBuilding: {wakenet_model}-{text_font}-{emoji_collection}")
    print(f"Running: {' '.join(cmd)}")
try:
# Run build.py
result = subprocess.run(cmd, check=True, cwd=os.path.dirname(__file__))
# Generate output filename
output_name = f"{wakenet_model}-{text_font}-{emoji_collection}.bin"
# Copy generated assets.bin to final directory with new name
src_path = os.path.join(build_dir, "assets.bin")
dst_path = os.path.join(final_dir, output_name)
if os.path.exists(src_path):
shutil.copy2(src_path, dst_path)
            print(f"✓ Generated: {output_name}")
return True
else:
            print(f"✗ Error: generated assets.bin file not found")
return False
except subprocess.CalledProcessError as e:
        print(f"✗ Build failed: {e}")
return False
except Exception as e:
        print(f"✗ Unexpected error: {e}")
return False
def main():
# Configuration
wakenet_models = [
"none",
"wn9_nihaoxiaozhi_tts",
"wn9s_nihaoxiaozhi"
]
text_fonts = [
"none",
"font_puhui_common_14_1",
"font_puhui_common_16_4",
"font_puhui_common_20_4",
"font_puhui_common_30_4",
]
emoji_collections = [
"none",
"emojis_32",
"emojis_64",
]
# Get script directory
script_dir = os.path.dirname(os.path.abspath(__file__))
# Set directory paths
build_dir = os.path.join(script_dir, "build")
final_dir = os.path.join(build_dir, "final")
# Ensure directories exist
ensure_dir(build_dir)
ensure_dir(final_dir)
    print("Building multiple SPIFFS assets partitions...")
    print(f"Output directory: {final_dir}")
# Track successful builds
successful_builds = 0
total_combinations = len(wakenet_models) * len(text_fonts) * len(emoji_collections)
# Build all combinations
for wakenet_model in wakenet_models:
for text_font in text_fonts:
for emoji_collection in emoji_collections:
if build_assets(wakenet_model, text_font, emoji_collection, build_dir, final_dir):
successful_builds += 1
    print(f"\nBuild finished!")
    print(f"Successful builds: {successful_builds}/{total_combinations}")
    print(f"Output location: {final_dir}")
# List generated files
if os.path.exists(final_dir):
files = [f for f in os.listdir(final_dir) if f.endswith('.bin')]
if files:
            print("\nGenerated files:")
for file in sorted(files):
file_size = os.path.getsize(os.path.join(final_dir, file))
print(f" {file} ({file_size:,} bytes)")
else:
            print("\nNo generated .bin files found")
if __name__ == "__main__":
main()


@@ -0,0 +1,123 @@
import os
import struct
import argparse
def struct_pack_string(string, max_len=None):
    """
    Pack a string into binary data.
    If max_len is None, the packed length equals len(string);
    otherwise len(string) must be <= max_len and the remainder is
    zero-padded with struct.pack('x').
    string: input Python string
    max_len: fixed output length in bytes
    """
if max_len == None :
max_len = len(string)
else:
assert len(string) <= max_len
left_num = max_len - len(string)
out_bytes = None
for char in string:
if out_bytes == None:
out_bytes = struct.pack('b', ord(char))
else:
out_bytes += struct.pack('b', ord(char))
for i in range(left_num):
out_bytes += struct.pack('x')
return out_bytes
def read_data(filename):
"""
Read binary data, like index and mndata
"""
data = None
with open(filename, "rb") as f:
data = f.read()
return data
def pack_models(model_path, out_file="srmodels.bin"):
"""
Pack all models into one binary file by the following format:
{
model_num: int
model1_info: model_info_t
model2_info: model_info_t
...
model1_index,model1_data,model1_MODEL_INFO
model1_index,model1_data,model1_MODEL_INFO
...
}model_pack_t
{
model_name: char[32]
file_number: int
file1_name: char[32]
file1_start: int
file1_len: int
file2_name: char[32]
file2_start: int // data_len = info_start - data_start
file2_len: int
...
}model_info_t
model_path: the path of models
    out_file: the output binary filename
"""
models = {}
file_num = 0
model_num = 0
for root, dirs, _ in os.walk(model_path):
for model_name in dirs:
models[model_name] = {}
model_dir = os.path.join(root, model_name)
model_num += 1
for _, _, files in os.walk(model_dir):
for file_name in files:
file_num += 1
file_path = os.path.join(model_dir, file_name)
models[model_name][file_name] = read_data(file_path)
model_num = len(models)
header_len = 4 + model_num*(32+4) + file_num*(32+4+4)
out_bin = struct.pack('I', model_num) # model number
data_bin = None
for key in models:
model_bin = struct_pack_string(key, 32) # + model name
model_bin += struct.pack('I', len(models[key])) # + file number in this model
for file_name in models[key]:
model_bin += struct_pack_string(file_name, 32) # + file name
if data_bin == None:
model_bin += struct.pack('I', header_len)
data_bin = models[key][file_name]
model_bin += struct.pack('I', len(models[key][file_name]))
# print(file_name, header_len, len(models[key][file_name]), len(data_bin))
else:
model_bin += struct.pack('I', header_len+len(data_bin))
# print(file_name, header_len+len(data_bin), len(models[key][file_name]))
data_bin += models[key][file_name]
model_bin += struct.pack('I', len(models[key][file_name]))
out_bin += model_bin
assert len(out_bin) == header_len
if data_bin != None:
out_bin += data_bin
out_file = os.path.join(model_path, out_file)
with open(out_file, "wb") as f:
f.write(out_bin)
if __name__ == "__main__":
# input parameter
parser = argparse.ArgumentParser(description='Model package tool')
parser.add_argument('-m', '--model_path', help="the path of model files")
parser.add_argument('-o', '--out_file', default="srmodels.bin", help="the path of binary file")
args = parser.parse_args()
# convert(args.model_path, args.out_file)
pack_models(model_path=args.model_path, out_file=args.out_file)


@@ -0,0 +1,647 @@
# SPDX-FileCopyrightText: 2024-2025 Espressif Systems (Shanghai) CO LTD
# SPDX-License-Identifier: Apache-2.0
import io
import os
import argparse
import json
import shutil
import math
import sys
import time
import numpy as np
import importlib
import subprocess
import urllib.request
from PIL import Image
from datetime import datetime
from dataclasses import dataclass
from typing import List
from pathlib import Path
from packaging import version
sys.dont_write_bytecode = True
GREEN = '\033[1;32m'
RED = '\033[1;31m'
RESET = '\033[0m'
@dataclass
class AssetCopyConfig:
assets_path: str
target_path: str
spng_enable: bool
sjpg_enable: bool
qoi_enable: bool
sqoi_enable: bool
row_enable: bool
support_format: List[str]
split_height: int
@dataclass
class PackModelsConfig:
target_path: str
include_path: str
image_file: str
assets_path: str
name_length: int
def generate_header_filename(path):
asset_name = os.path.basename(path)
header_filename = f'mmap_generate_{asset_name}.h'
return header_filename
def compute_checksum(data):
checksum = sum(data) & 0xFFFF
return checksum
def sort_key(filename):
basename, extension = os.path.splitext(filename)
return extension, basename
def download_v8_script(convert_path):
"""
Ensure that the lvgl_image_converter repository is present at the specified path.
If not, clone the repository. Then, checkout to a specific commit.
Parameters:
- convert_path (str): The directory path where lvgl_image_converter should be located.
"""
# Check if convert_path is not empty
if convert_path:
# If the directory does not exist, create it and clone the repository
if not os.path.exists(convert_path):
os.makedirs(convert_path, exist_ok=True)
try:
subprocess.run(
['git', 'clone', 'https://github.com/W-Mai/lvgl_image_converter.git', convert_path],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=True
)
except subprocess.CalledProcessError as e:
print(f'Git clone failed: {e}')
sys.exit(1)
# Checkout to the specific commit
try:
subprocess.run(
['git', 'checkout', '9174634e9dcc1b21a63668969406897aad650f35'],
cwd=convert_path,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=True
)
except subprocess.CalledProcessError as e:
print(f'Failed to checkout to the specific commit: {e}')
sys.exit(1)
else:
print('Error: convert_path is NULL')
sys.exit(1)
def download_v9_script(url: str, destination: str) -> None:
"""
Download a Python script from a URL to a local destination.
Parameters:
- url (str): URL to download the script from.
- destination (str): Local path to save the downloaded script.
Raises:
- Exception: If the download fails.
"""
file_path = Path(destination)
# Check if the file already exists
if file_path.exists():
if file_path.is_file():
return
try:
# Create the parent directories if they do not exist
file_path.parent.mkdir(parents=True, exist_ok=True)
# Open the URL and retrieve the data
with urllib.request.urlopen(url) as response, open(file_path, 'wb') as out_file:
data = response.read() # Read the entire response
out_file.write(data) # Write data to the local file
except urllib.error.HTTPError as e:
print(f'HTTP Error: {e.code} - {e.reason} when accessing {url}')
sys.exit(1)
except urllib.error.URLError as e:
print(f'URL Error: {e.reason} when accessing {url}')
sys.exit(1)
except Exception as e:
print(f'An unexpected error occurred: {e}')
sys.exit(1)
def split_image(im, block_size, input_dir, ext, convert_to_qoi):
"""Splits the image into blocks based on the block size."""
width, height = im.size
if block_size:
splits = math.ceil(height / block_size)
else:
splits = 1
for i in range(splits):
if i < splits - 1:
crop = im.crop((0, i * block_size, width, (i + 1) * block_size))
else:
crop = im.crop((0, i * block_size, width, height))
output_path = os.path.join(input_dir, str(i) + ext)
crop.save(output_path, quality=100)
qoi_module = importlib.import_module('qoi-conv.qoi')
Qoi = qoi_module.Qoi
replace_extension = qoi_module.replace_extension
if convert_to_qoi:
with Image.open(output_path) as img:
if img.mode != 'RGBA':
img = img.convert('RGBA')
img_data = np.asarray(img)
out_path = qoi_module.replace_extension(output_path, 'qoi')
new_image = qoi_module.Qoi().save(out_path, img_data)
os.remove(output_path)
return width, height, splits
def create_header(width, height, splits, split_height, lenbuf, ext):
"""Creates the header for the output file based on the format."""
header = bytearray()
if ext.lower() == '.jpg':
header += bytearray('_SJPG__'.encode('UTF-8'))
elif ext.lower() == '.png':
header += bytearray('_SPNG__'.encode('UTF-8'))
elif ext.lower() == '.qoi':
header += bytearray('_SQOI__'.encode('UTF-8'))
# 6 BYTES VERSION
header += bytearray(('\x00V1.00\x00').encode('UTF-8'))
# WIDTH 2 BYTES
header += width.to_bytes(2, byteorder='little')
# HEIGHT 2 BYTES
header += height.to_bytes(2, byteorder='little')
# NUMBER OF ITEMS 2 BYTES
header += splits.to_bytes(2, byteorder='little')
# SPLIT HEIGHT 2 BYTES
header += split_height.to_bytes(2, byteorder='little')
for item_len in lenbuf:
# LENGTH 2 BYTES
header += item_len.to_bytes(2, byteorder='little')
return header
def save_image(output_file_path, header, split_data):
"""Saves the image with the constructed header and split data."""
with open(output_file_path, 'wb') as f:
if header is not None:
f.write(header + split_data)
else:
f.write(split_data)
def handle_lvgl_version_v9(input_file: str, input_dir: str,
input_filename: str, convert_path: str) -> None:
"""
Handle conversion for LVGL versions greater than 9.0.
Parameters:
- input_file (str): Path to the input image file.
- input_dir (str): Directory of the input file.
- input_filename (str): Name of the input file.
- convert_path (str): Path for conversion scripts and outputs.
"""
convert_file = os.path.join(convert_path, 'LVGLImage.py')
lvgl_image_url = 'https://raw.githubusercontent.com/lvgl/lvgl/master/scripts/LVGLImage.py'
download_v9_script(url=lvgl_image_url, destination=convert_file)
lvgl_script = Path(convert_file)
cmd = [
'python',
str(lvgl_script),
'--ofmt', 'BIN',
'--cf', config_data['support_raw_cf'],
'--compress', 'NONE',
'--output', str(input_dir),
input_file
]
try:
result = subprocess.run(
cmd,
check=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
print(f'Completed {input_filename} -> BIN')
except subprocess.CalledProcessError as e:
print('An error occurred while executing LVGLImage.py:')
print(e.stderr)
sys.exit(e.returncode)
def handle_lvgl_version_v8(input_file: str, input_dir: str, input_filename: str, convert_path: str) -> None:
"""
Handle conversion for supported LVGL versions (<= 9.0).
Parameters:
- input_file (str): Path to the input image file.
- input_dir (str): Directory of the input file.
- input_filename (str): Name of the input file.
- convert_path (str): Path for conversion scripts and outputs.
"""
download_v8_script(convert_path=convert_path)
if convert_path not in sys.path:
sys.path.append(convert_path)
try:
import lv_img_conv
except ImportError as e:
print(f"Failed to import 'lv_img_conv' from '{convert_path}': {e}")
sys.exit(1)
try:
lv_img_conv.conv_one_file(
root=Path(input_dir),
filepath=Path(input_file),
f=config_data['support_raw_ff'],
cf=config_data['support_raw_cf'],
ff='BIN',
dither=config_data['support_raw_dither'],
bgr_mode=config_data['support_raw_bgr'],
)
print(f'Completed {input_filename} -> BIN')
except KeyError as e:
print(f'Missing configuration key: {e}')
sys.exit(1)
except Exception as e:
print(f'An error occurred during conversion: {e}')
sys.exit(1)
def process_image(input_file, height_str, output_extension, convert_to_qoi=False):
"""Main function to process the image and save it as .sjpg, .spng, or .sqoi."""
try:
SPLIT_HEIGHT = int(height_str)
if SPLIT_HEIGHT < 0:
            raise ValueError('Height must be a non-negative integer')
except ValueError as e:
        print('Error: Height must be a non-negative integer')
sys.exit(1)
input_dir, input_filename = os.path.split(input_file)
base_filename, ext = os.path.splitext(input_filename)
OUTPUT_FILE_NAME = base_filename
try:
im = Image.open(input_file)
except Exception as e:
print('Error:', e)
sys.exit(0)
width, height, splits = split_image(im, SPLIT_HEIGHT, input_dir, ext, convert_to_qoi)
split_data = bytearray()
lenbuf = []
if convert_to_qoi:
ext = '.qoi'
for i in range(splits):
with open(os.path.join(input_dir, str(i) + ext), 'rb') as f:
a = f.read()
split_data += a
lenbuf.append(len(a))
os.remove(os.path.join(input_dir, str(i) + ext))
header = None
if splits == 1 and convert_to_qoi:
output_file_path = os.path.join(input_dir, OUTPUT_FILE_NAME + ext)
else:
header = create_header(width, height, splits, SPLIT_HEIGHT, lenbuf, ext)
output_file_path = os.path.join(input_dir, OUTPUT_FILE_NAME + output_extension)
save_image(output_file_path, header, split_data)
print('Completed', input_filename, '->', os.path.basename(output_file_path))
def convert_image_to_qoi(input_file, height_str):
process_image(input_file, height_str, '.sqoi', convert_to_qoi=True)
def convert_image_to_simg(input_file, height_str):
input_dir, input_filename = os.path.split(input_file)
_, ext = os.path.splitext(input_filename)
output_extension = '.sjpg' if ext.lower() == '.jpg' else '.spng'
process_image(input_file, height_str, output_extension, convert_to_qoi=False)
def convert_image_to_raw(input_file: str) -> None:
"""
Convert an image to raw binary format compatible with LVGL.
Parameters:
- input_file (str): Path to the input image file.
Raises:
- FileNotFoundError: If required scripts are not found.
- subprocess.CalledProcessError: If the external conversion script fails.
- KeyError: If required keys are missing in config_data.
"""
input_dir, input_filename = os.path.split(input_file)
_, ext = os.path.splitext(input_filename)
convert_path = os.path.join(os.path.dirname(input_file), 'lvgl_image_converter')
lvgl_ver_str = config_data.get('lvgl_ver', '9.0.0')
try:
lvgl_version = version.parse(lvgl_ver_str)
except version.InvalidVersion:
print(f'Invalid LVGL version format: {lvgl_ver_str}')
sys.exit(1)
if lvgl_version >= version.parse('9.0.0'):
handle_lvgl_version_v9(
input_file=input_file,
input_dir=input_dir,
input_filename=input_filename,
convert_path=convert_path
)
else:
handle_lvgl_version_v8(
input_file=input_file,
input_dir=input_dir,
input_filename=input_filename,
convert_path=convert_path
)
def pack_assets(config: PackModelsConfig):
"""
Pack models based on the provided configuration.
"""
target_path = config.target_path
assets_include_path = config.include_path
out_file = config.image_file
assets_path = config.assets_path
max_name_len = config.name_length
merged_data = bytearray()
file_info_list = []
skip_files = ['config.json', 'lvgl_image_converter']
file_list = sorted(os.listdir(target_path), key=sort_key)
for filename in file_list:
if filename in skip_files:
continue
file_path = os.path.join(target_path, filename)
file_name = os.path.basename(file_path)
file_size = os.path.getsize(file_path)
try:
img = Image.open(file_path)
width, height = img.size
except Exception as e:
# print("Error:", e)
_, file_extension = os.path.splitext(file_path)
if file_extension.lower() in ['.sjpg', '.spng', '.sqoi']:
offset = 14
with open(file_path, 'rb') as f:
f.seek(offset)
width_bytes = f.read(2)
height_bytes = f.read(2)
width = int.from_bytes(width_bytes, byteorder='little')
height = int.from_bytes(height_bytes, byteorder='little')
else:
width, height = 0, 0
file_info_list.append((file_name, len(merged_data), file_size, width, height))
# Add 0x5A5A prefix to merged_data
merged_data.extend(b'\x5A' * 2)
with open(file_path, 'rb') as bin_file:
bin_data = bin_file.read()
merged_data.extend(bin_data)
total_files = len(file_info_list)
mmap_table = bytearray()
for file_name, offset, file_size, width, height in file_info_list:
if len(file_name) > int(max_name_len):
print(f'\033[1;33mWarn:\033[0m "{file_name}" exceeds {max_name_len} bytes and will be truncated.')
fixed_name = file_name.ljust(int(max_name_len), '\0')[:int(max_name_len)]
mmap_table.extend(fixed_name.encode('utf-8'))
mmap_table.extend(file_size.to_bytes(4, byteorder='little'))
mmap_table.extend(offset.to_bytes(4, byteorder='little'))
mmap_table.extend(width.to_bytes(2, byteorder='little'))
mmap_table.extend(height.to_bytes(2, byteorder='little'))
combined_data = mmap_table + merged_data
combined_checksum = compute_checksum(combined_data)
combined_data_length = len(combined_data).to_bytes(4, byteorder='little')
header_data = total_files.to_bytes(4, byteorder='little') + combined_checksum.to_bytes(4, byteorder='little')
final_data = header_data + combined_data_length + combined_data
with open(out_file, 'wb') as output_bin:
output_bin.write(final_data)
os.makedirs(assets_include_path, exist_ok=True)
current_year = datetime.now().year
asset_name = os.path.basename(assets_path)
file_path = os.path.join(assets_include_path, f'mmap_generate_{asset_name}.h')
with open(file_path, 'w') as output_header:
output_header.write('/*\n')
output_header.write(' * SPDX-FileCopyrightText: 2022-{} Espressif Systems (Shanghai) CO LTD\n'.format(current_year))
output_header.write(' *\n')
output_header.write(' * SPDX-License-Identifier: Apache-2.0\n')
output_header.write(' */\n\n')
output_header.write('/**\n')
output_header.write(' * @file\n')
output_header.write(" * @brief This file was generated by esp_mmap_assets, don't modify it\n")
output_header.write(' */\n\n')
output_header.write('#pragma once\n\n')
output_header.write("#include \"esp_mmap_assets.h\"\n\n")
output_header.write(f'#define MMAP_{asset_name.upper()}_FILES {total_files}\n')
output_header.write(f'#define MMAP_{asset_name.upper()}_CHECKSUM 0x{combined_checksum:04X}\n\n')
output_header.write(f'enum MMAP_{asset_name.upper()}_LISTS {{\n')
for i, (file_name, _, _, _, _) in enumerate(file_info_list):
enum_name = file_name.replace('.', '_')
output_header.write(f' MMAP_{asset_name.upper()}_{enum_name.upper()} = {i}, /*!< {file_name} */\n')
output_header.write('};\n')
print(f'All bin files have been merged into {os.path.basename(out_file)}')
def copy_assets(config: AssetCopyConfig):
"""
Copy assets to target_path based on the provided configuration.
"""
format_tuple = tuple(config.support_format)
assets_path = config.assets_path
target_path = config.target_path
for filename in os.listdir(assets_path):
if any(filename.endswith(suffix) for suffix in format_tuple):
source_file = os.path.join(assets_path, filename)
target_file = os.path.join(target_path, filename)
shutil.copyfile(source_file, target_file)
conversion_map = {
'.jpg': [
(config.sjpg_enable, convert_image_to_simg),
(config.qoi_enable, convert_image_to_qoi),
],
'.png': [
(config.spng_enable, convert_image_to_simg),
(config.qoi_enable, convert_image_to_qoi),
],
}
file_ext = os.path.splitext(filename)[1].lower()
conversions = conversion_map.get(file_ext, [])
converted = False
for enable_flag, convert_func in conversions:
if enable_flag:
convert_func(target_file, config.split_height)
os.remove(target_file)
converted = True
break
if not converted and config.row_enable:
convert_image_to_raw(target_file)
os.remove(target_file)
else:
            print(f'No match found for file: {filename}, format_tuple: {format_tuple}')
def process_assets_build(config_data):
assets_path = config_data['assets_path']
image_file = config_data['image_file']
target_path = os.path.dirname(image_file)
include_path = config_data['include_path']
name_length = config_data['name_length']
split_height = config_data['split_height']
support_format = [fmt.strip() for fmt in config_data['support_format'].split(',')]
copy_config = AssetCopyConfig(
assets_path=assets_path,
target_path=target_path,
spng_enable=config_data['support_spng'],
sjpg_enable=config_data['support_sjpg'],
qoi_enable=config_data['support_qoi'],
sqoi_enable=config_data['support_sqoi'],
row_enable=config_data['support_raw'],
support_format=support_format,
split_height=split_height
)
pack_config = PackModelsConfig(
target_path=target_path,
include_path=include_path,
image_file=image_file,
assets_path=assets_path,
name_length=name_length
)
print('--support_format:', support_format)
if '.jpg' in support_format or '.png' in support_format:
print('--support_spng:', copy_config.spng_enable)
print('--support_sjpg:', copy_config.sjpg_enable)
print('--support_qoi:', copy_config.qoi_enable)
print('--support_raw:', copy_config.row_enable)
if copy_config.sqoi_enable:
print('--support_sqoi:', copy_config.sqoi_enable)
if copy_config.spng_enable or copy_config.sjpg_enable or copy_config.sqoi_enable:
print('--split_height:', copy_config.split_height)
if copy_config.row_enable:
print('--lvgl_version:', config_data['lvgl_ver'])
if not os.path.exists(target_path):
os.makedirs(target_path, exist_ok=True)
for filename in os.listdir(target_path):
file_path = os.path.join(target_path, filename)
if os.path.isfile(file_path) or os.path.islink(file_path):
os.unlink(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
copy_assets(copy_config)
pack_assets(pack_config)
total_size = os.path.getsize(os.path.join(target_path, image_file))
recommended_size = math.ceil(total_size / 1024)
partition_size = math.ceil(int(config_data['assets_size'], 16))
print(f'{"Total size:":<30} {GREEN}{total_size / 1024:>8.2f}K ({total_size}){RESET}')
print(f'{"Partition size:":<30} {GREEN}{partition_size / 1024:>8.2f}K ({partition_size}){RESET}')
if int(config_data['assets_size'], 16) <= total_size:
print(f'Recommended partition size: {GREEN}{recommended_size}K{RESET}')
        print(f'{RED}Error: Binary size exceeds partition size.{RESET}')
sys.exit(1)
def process_assets_merge(config_data):
app_bin_path = config_data['app_bin_path']
image_file = config_data['image_file']
target_path = os.path.dirname(image_file)
combined_bin_path = os.path.join(target_path, 'combined.bin')
append_bin_path = os.path.join(target_path, image_file)
app_size = os.path.getsize(app_bin_path)
asset_size = os.path.getsize(append_bin_path)
total_size = asset_size + app_size
recommended_size = math.ceil(total_size / 1024)
partition_size = math.ceil(int(config_data['assets_size'], 16))
print(f'{"Asset size:":<30} {GREEN}{asset_size / 1024:>8.2f}K ({asset_size}){RESET}')
print(f'{"App size:":<30} {GREEN}{app_size / 1024:>8.2f}K ({app_size}){RESET}')
print(f'{"Total size:":<30} {GREEN}{total_size / 1024:>8.2f}K ({total_size}){RESET}')
print(f'{"Partition size:":<30} {GREEN}{partition_size / 1024:>8.2f}K ({partition_size}){RESET}')
if total_size > partition_size:
print(f'Recommended partition size: {GREEN}{recommended_size}K{RESET}')
        print(f'{RED}Error: Binary size exceeds partition size.{RESET}')
sys.exit(1)
with open(combined_bin_path, 'wb') as combined_bin:
with open(app_bin_path, 'rb') as app_bin:
combined_bin.write(app_bin.read())
with open(append_bin_path, 'rb') as img_bin:
combined_bin.write(img_bin.read())
shutil.move(combined_bin_path, app_bin_path)
print(f'Append bin created: {os.path.basename(app_bin_path)}')
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Move and Pack assets.')
parser.add_argument('--config', required=True, help='Path to the configuration file')
parser.add_argument('--merge', action='store_true', help='Merge assets with app binary')
args = parser.parse_args()
with open(args.config, 'r') as f:
config_data = json.load(f)
if args.merge:
process_assets_merge(config_data)
else:
process_assets_build(config_data)