microk8s disk consumption

Posted on February 12, 2025

24G	/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots

Add these lines to /var/snap/microk8s/current/args/kubelet:

--image-gc-high-threshold=50
--image-gc-low-threshold=40
--maximum-dead-containers=0

and restart MicroK8s:

snap restart microk8s

and after garbage collection runs you will see:

9.1G	/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots
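
The before/after numbers come from a plain `du` run over the snapshots directory; a minimal sketch:

```shell
# Re-check snapshot disk usage after kubelet image garbage collection has run
SNAP_DIR=/var/snap/microk8s/common/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots
[ -d "$SNAP_DIR" ] && du -sh "$SNAP_DIR" || true
```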

» Posted in k8s, Linux | Leave a comment

OpenVPN: Real Speedup [DCO]

Posted on January 30, 2025

We have a test bench of two servers connected via physical 10G ports.

Let’s see what iperf3 shows:

[  5]   0.00-1.00   sec  1.08 GBytes  9.29 Gbits/sec                  
[  5]   1.00-2.00   sec  1.08 GBytes  9.30 Gbits/sec                  
[  5]   2.00-3.00   sec  1.08 GBytes  9.29 Gbits/sec  
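
The throughput numbers above come from a plain iperf3 run: a server on one box, a client on the other. A sketch (the IP address here is a placeholder, replace it with your second server's address):

```shell
# iperf3 server on box A, client on box B (address is a placeholder)
SERVER_IP=10.0.0.1
SERVER_CMD="iperf3 -s"                    # run on box A
CLIENT_CMD="iperf3 -c $SERVER_IP -t 10"   # run on box B
echo "$CLIENT_CMD"
```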

Now let’s put OpenVPN between those two and repeat the iperf3 test:

[  5]   0.00-1.00   sec  82.0 MBytes   687 Mbits/sec                  
[  5]   1.00-2.00   sec  94.0 MBytes   789 Mbits/sec                  
[  5]   2.00-3.00   sec  96.4 MBytes   808 Mbits/sec                  

Now let’s enable DCO and check the speed again:

[  5]   0.00-1.00   sec   206 MBytes  1.73 Gbits/sec                  
[  5]   1.00-2.00   sec   227 MBytes  1.90 Gbits/sec                  
[  5]   2.00-3.00   sec   213 MBytes  1.79 Gbits/sec                  
[  5]   3.00-4.00   sec   159 MBytes  1.33 Gbits/sec                  
[  5]   4.00-5.00   sec   160 MBytes  1.35 Gbits/sec

Like magic! What else can we do? If it’s a virtual environment, expose the AES CPU instructions (AES-NI) to the guest and check again:

[  5]   0.00-1.00   sec   227 MBytes  1.90 Gbits/sec                  
[  5]   1.00-2.00   sec   238 MBytes  2.00 Gbits/sec                  
[  5]   2.00-3.00   sec   234 MBytes  1.96 Gbits/sec                  
[  5]   3.00-4.00   sec   233 MBytes  1.95 Gbits/sec    

Now the question you have is «HOW?». Let me show you a few steps and my config files!

On both sides, server and client:

apt install openvpn-dco-dkms
echo 'ovpn-dco-v2' >> /etc/modules-load.d/modules.conf
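
A hypothetical sanity check that the DCO kernel module actually built and loaded (the module name is taken from the echo above):

```shell
# Check that the DCO module is available to the kernel and currently loaded
MODULE=ovpn-dco-v2
modinfo "$MODULE" >/dev/null 2>&1 && echo "module available" || true
lsmod 2>/dev/null | grep ovpn || true
```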

/etc/openvpn/client/test.conf

client
remote test1.srv.in 1194
dev tun
proto udp
persist-key
persist-tun
tls-client
script-security 2
cipher AES-256-GCM
auth SHA256
data-ciphers AES-256-GCM
auth-nocache
remote-cert-tls server

/etc/openvpn/server/server.conf

proto udp
port 1194
dev tun
ifconfig 172.16.45.1 255.255.255.0
server 172.16.45.0 255.255.255.0
push "route-metric 100"
keepalive 3 10
user nobody
group nogroup
persist-key
persist-tun
status server-openvpn-status.log
log server-openvpn.log
verb 2
client-to-client
client-config-dir /etc/openvpn/ccd
topology subnet
cipher AES-256-GCM
auth SHA256
data-ciphers AES-256-GCM
fast-io
sndbuf 393216
rcvbuf 393216
push "sndbuf 393216"
push "rcvbuf 393216"
txqueuelen 4000
tun-mtu 1420
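
After applying the configs, restart OpenVPN and confirm DCO is in use. A sketch, assuming the Debian/Ubuntu systemd unit naming (`openvpn-server@<conf>`) and the log path from the server config above:

```shell
# Restart the server instance and look for DCO mentions in its log
UNIT=openvpn-server@server
LOG=server-openvpn.log
command -v systemctl >/dev/null 2>&1 && systemctl restart "$UNIT" 2>/dev/null || true
[ -f "$LOG" ] && grep -i dco "$LOG" || true
```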

DCO has several limitations; you can read about them here: https://docs.netgate.com/pfsense/en/latest/vpn/openvpn/dco.html

» Posted in Linux, vpn | Leave a comment

Ansible playbook to add users with ssh keys and sudo

Posted on December 3, 2024

# yamllint disable rule:line-length
---
- name: Add admin users
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Create account
      ansible.builtin.user:
        name: "{{ item.name }}"
        groups: "sudo"
        shell: /bin/bash
        append: true
      with_items: "{{ users }}"
    - name: Set authorized key taken from file
      ansible.posix.authorized_key:
        user: "{{ item.name }}"
        exclusive: true
        key: "{{ item.ssh_pub_key }}"
      with_items: "{{ users }}"
    - name: Add user to sudoers
      community.general.sudoers:
        name: "{{ item.name }}"
        state: present
        user: "{{ item.name }}"
        commands: 'ALL'
        nopassword: true
      with_items: "{{ users }}"
  vars:
    users:
      - name: mihael
        ssh_pub_key: "ssh-rsa AAAAB......"
      - name: maria
        ssh_pub_key: "ssh-rsa AAAAB......"
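
A hypothetical invocation, assuming the playbook is saved as `add-users.yml` and your inventory lives in `hosts.ini`:

```shell
# Run the playbook against all hosts in the inventory (filenames are assumptions)
PLAYBOOK=add-users.yml
INVENTORY=hosts.ini
RUN_CMD="ansible-playbook -i $INVENTORY $PLAYBOOK"
echo "$RUN_CMD"
```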

» Posted in Uncategorized | Leave a comment

Sentry cleanup

Posted on December 2, 2024

To clean up self-hosted Sentry, you can use the following script:

docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.spans_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.generic_metric_distributions_aggregated_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.generic_metric_distributions_raw_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE system.metric_log'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.transactions_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.outcomes_raw_local'
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "TRUNCATE TABLE nodestore_node;"
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "VACUUM FULL"
docker exec -it sentry-self-hosted-redis-1 redis-cli FLUSHALL
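
Before truncating, it can help to see which ClickHouse tables are actually the largest. A sketch using a standard query against `system.parts` (the container name matches the script above):

```shell
# Rank ClickHouse tables by on-disk size, biggest first
QUERY="SELECT table, formatReadableSize(sum(bytes)) AS size FROM system.parts WHERE active GROUP BY table ORDER BY sum(bytes) DESC LIMIT 10"
command -v docker >/dev/null 2>&1 && \
  docker exec -i sentry-self-hosted-clickhouse-1 clickhouse-client -q "$QUERY" || true
```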

If you want to delete events older than 3 days:

docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "DELETE FROM nodestore_node WHERE \"timestamp\" < NOW() - INTERVAL '3 day';"
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "VACUUM FULL nodestore_node;"

To clean up history older than 7 days:

docker exec -it sentry-self-hosted-worker-1 sentry cleanup --days 7

Sometimes the Kafka volume grows uncontrollably; in that case you can recreate it:

cd /opt/sentry/install/self-hosted-23.7.1/
docker compose down --volumes
docker volume rm sentry-kafka
docker volume rm sentry-zookeeper
./install.sh
docker compose up -d

» Posted in Uncategorized | Leave a comment

«firstBit.License Server 2» on Ubuntu Linux

Posted on November 12, 2024

/etc/systemd/system/firstBit.service

[Unit]
Description=firstBit.LicenseServer
After=syslog.target
After=network.target

[Service]
Type=simple
Restart=on-failure
PIDFile=/run/firstBit.pid
KillMode=control-group
ExecStart=/opt/firstBit.LicenseServer/linux/licenseserver --run --allow-ui-from-ip=*
ExecStop=/bin/sh -c '[ -n "$1" ] && kill -s TERM "$1"' -- "$MAINPID"
RestartSec=10s
User=root
Group=root
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target

Download and unpack the license server:

mkdir /opt/firstBit.LicenseServer/
cd /opt/firstBit.LicenseServer/
wget https://static.1cbit.online/updates/license-server/download/LicenseServer-v2.7z
7za x LicenseServer-v2.7z
chmod +x /opt/firstBit.LicenseServer/linux/licenseserver
systemctl daemon-reload 
systemctl start firstBit.service
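
You will likely also want the service to start on boot, and a quick status check afterwards; a sketch:

```shell
# Enable the unit on boot and verify it came up (service name from the unit above)
SERVICE=firstBit.service
command -v systemctl >/dev/null 2>&1 && systemctl enable "$SERVICE" 2>/dev/null || true
command -v systemctl >/dev/null 2>&1 && systemctl status "$SERVICE" --no-pager || true
```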

» Posted in Uncategorized | Leave a comment

HPE Smart Array E208i-a SR Gen10 firmware update

Posted on October 24, 2024

No drives were found in this system. You may need to reboot

You can obtain an rpm package here: https://downloads.linux.hpe.com/SDR/repo/spp-gen10/redhat/8/x86_64/current/
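
If HPE's ssacli utility happens to be installed, you can check the current controller firmware version before flashing. A hypothetical check (syntax from the HPE Smart Storage Administrator CLI):

```shell
# Show controller details and filter for the firmware version line
TOOL=ssacli
command -v "$TOOL" >/dev/null 2>&1 && \
  "$TOOL" ctrl all show detail | grep -i firmware || true
```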

If you use a RedHat-based distro, just install the rpm; otherwise extract it:

rpm2cpio firmware-smartarray-f7c07bdbbd-4.11-1.1.x86_64.rpm | cpio -idmv

and run the update process:

./usr/lib/x86_64-linux-gnu/firmware-smartarray-f7c07bdbbd-4.11-1.1/setup
Supplemental Update / Online ROM Flash Component for Linux (x64) - HPE Smart Array P408i-p, P408e-p, P408i-a, E208i-p, E208e-p, E208i-a, P816i-a SR Gen10 (4.11), searching...
1) HPE Smart Array E208i-a SR Gen10 in Slot 0 (3.53)
Select which devices to flash [#,#-#,(A)ll,(N)one]> A
Flashing HPE Smart Array E208i-a SR Gen10 in Slot 0 [ 3.53 -> 4.11 ]
Deferred flashes will be performed on next system reboot
============ Summary ============
Smart Component Finished
 
Summary Messages
================
Reboot needed to activate 1 new FW image
 
Exit Status: 1
Deferred flashes will be performed on next system reboot
A reboot is required to complete update.

Update the firmware to 7.11 for correct operation with iLO.

» Posted in Linux | Leave a comment

GrayLog: Hostname datanode not verified

Posted on October 23, 2024

After updating Graylog, we get errors like:

Unable to retrieve version from Elasticsearch node: Hostname datanode not verified 
Host name 'datanode' does not match the certificate subject provided 

This can be fixed by disabling SSL/TLS between the containers.

Put this line into .env:

GRAYLOG_DATANODE_INSECURE_STARTUP=true

then restart the containers:

docker compose stop
docker compose up -d

» Posted in Uncategorized | Leave a comment

ProxMox Qemu create Ubuntu template

Posted on April 5, 2024

wget https://cloud-images.ubuntu.com/releases/jammy/release/ubuntu-22.04-server-cloudimg-amd64.img
mv ubuntu-22.04-server-cloudimg-amd64.img ubuntu-22.04-server-cloudimg-amd64.qcow2
qemu-img resize ubuntu-22.04-server-cloudimg-amd64.qcow2 10G
 
qm create  9000 \
--name ubuntu22 \
--bootdisk virtio0 \
--ostype l26 \
--sockets 1  \
--cores 2 \
--memory 1024 \
--scsihw virtio-scsi-single \
--onboot yes \
--serial0 socket \
--vga serial0 \
--net0 virtio,bridge=vmbr0 \
--agent 1 \
--ide2 local-zfs:cloudinit \
--virtio0 local-zfs:0,import-from=/root/ubuntu-22.04-server-cloudimg-amd64.qcow2
 
qm set 9000 --ipconfig0 ip=dhcp

Enable snippets storage on ProxMox:

pvesm set local --content images,rootdir,vztmpl,backup,iso,snippets
cat > /var/lib/vz/snippets/9000.yaml << EOF
#cloud-config
preserve_hostname: true

users:
  - default
  - name: shakirov
    gecos: Artur Shakirov
    shell: /bin/bash
    groups: sudo
    sudo: ALL=(ALL) NOPASSWD:ALL
    passwd: $PASSWORD_HASH_TAKE_IT_FROM_YOUR_/etc/shadow_
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5o74MtkmdOjSIvqvV+z0vtB65KE2EHLk8FGWqIqOxVg2nAvHNKS7Zy255c+mAWOS+sEJUsZMFlxaIsqS7f1nf/3TMftlnlRH3WNdoh2QP7lsEccpRrPymhD7+ZkouC0FosqciGEKGo0sGXnnyLnNajYp01UHmgsALH5vEsK9xXeiTtinvEDanI4QrI9U4bCoIEGboKeQPhvk7355x7hV05RBpq3fud/No+rbiD9PZxUQCI/l1H6GWtLbWE/LaGxS1CmBb1Rw3Ea5agJ5yX24F+Ey19CnKk8WsW649AI4HO4QdTKE7zwIEWW46ONIAEnpV0LkYmJbfBUCaKo/8g6I3
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPQ187Jo6t/Wxxgs73NnEWc+OGebbruOY/DfmxemFX2C shakirov@shakirov
 
write_files:
  - path: /etc/sudoers.d/cloud-init
    content: |
      Defaults !requiretty
 
package_update: true
package_upgrade: true
packages:
  - qemu-guest-agent
  - pwgen
  - nmap
  - htop
  - iftop
 
runcmd:
  - sed -i -e 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
  - sed -i -e 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  - [ systemctl, enable, qemu-guest-agent ]
  - [ systemctl, start, qemu-guest-agent ]
EOF

Apply the cloud-init config to the VM:

qm set 9000  --cicustom "user=local:snippets/9000.yaml"

Now we have the imported cloud image and can prepare it. At this stage you can customize the image as needed:

qm start 9000 && qm terminal 9000
sudo -i
cat /dev/null > /etc/machine-id
cloud-init clean
history -c
shutdown -h now

Convert the VM to a template:

qm template 9000

And now we can create a VM from the template:

qm clone 9000 107 --full --name mgmt
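
As a hypothetical follow-up, you can give the clone a static address via the same cloud-init mechanism (the addresses below are placeholders, adjust them to your network):

```shell
# Set a static IP on the cloned VM and start it (VMID from the clone above)
VMID=107
IPCONFIG="ip=192.168.1.107/24,gw=192.168.1.1"
command -v qm >/dev/null 2>&1 && qm set "$VMID" --ipconfig0 "$IPCONFIG" || true
command -v qm >/dev/null 2>&1 && qm start "$VMID" || true
```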

» Posted in Proxmox VE | Leave a comment

Sentry: «All events» is empty

Posted on March 25, 2024

If you have an nginx reverse proxy in front of your self-hosted Sentry, you can run into an issue with an empty «All events» page.

This can be fixed by adding the following to the nginx config:

    proxy_buffer_size          128k;
    proxy_buffers    16 256k;

If you need the complete config, here it is:
Read more

» Posted in Linux, Nginx | Leave a comment

ProxMox migration fails: Host key verification failed.

Posted on March 22, 2024

If you get an error like this when migrating a VM from one ProxMox to another:

# /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-n23' root@10.10.10.10 /bin/true
Host key verification failed.
ERROR: migration aborted (duration 00:00:01): Can't connect to destination address using public key
TASK ERROR: migration aborted

and you have FreeIPA installed on the ProxMox hosts, you can fix it temporarily with:

 ssh -o 'HostKeyAlias=pve-n23' root@10.10.10.10 /bin/true

or permanently, by commenting out this line in /etc/ssh/ssh_config.d/04-ipa.conf:

#GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts

» Posted in Linux, Proxmox VE | Leave a comment
