Ansible playbook to add users with ssh keys and sudo
Posted on December 3, 2024
# yamllint disable rule:line-length
---
- name: Add admin users
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Create account
      ansible.builtin.user:
        name: "{{ item.name }}"
        groups: "sudo"
        shell: /bin/bash
        append: true
      with_items: "{{ users }}"
    - name: Set authorized key taken from file
      ansible.posix.authorized_key:
        user: "{{ item.name }}"
        exclusive: true
        key: "{{ item.ssh_pub_key }}"
      with_items: "{{ users }}"
    - name: Add user to sudoers
      community.general.sudoers:
        name: "{{ item.name }}"
        state: present
        user: "{{ item.name }}"
        commands: 'ALL'
        nopassword: true
      with_items: "{{ users }}"
  vars:
    users:
      - name: mihael
        ssh_pub_key: "ssh-rsa AAAAB......"
      - name: maria
        ssh_pub_key: "ssh-rsa AAAAB......"
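The playbook uses the ansible.posix and community.general collections, so they must be installed first. Assuming the playbook is saved as add-users.yml (a hypothetical file name) next to your inventory, it can be run like this:
ansible-galaxy collection install ansible.posix community.general
ansible-playbook -i inventory add-users.yml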
Sentry cleanup
Posted on December 2, 2024
To clean up self-hosted Sentry you can use the following script:
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.spans_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.generic_metric_distributions_aggregated_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.generic_metric_distributions_raw_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE system.metric_log'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.transactions_local'
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client -q 'TRUNCATE default.outcomes_raw_local'
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "TRUNCATE TABLE nodestore_node;"
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "VACUUM FULL"
docker exec -it sentry-self-hosted-redis-1 redis-cli FLUSHALL
If you want to delete events older than 3 days:
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "DELETE FROM nodestore_node WHERE "timestamp" < NOW()-INTERVAL '3 day';"
docker exec --tty -u postgres sentry-self-hosted-postgres-1 psql -c "vacuum full nodestore_node;"
To clean up history older than 7 days:
docker exec -it sentry-self-hosted-worker-1 sentry cleanup --days 7
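To run this regularly, the cleanup can be scheduled from cron on the Docker host; a sketch, assuming a hypothetical /etc/cron.d/sentry-cleanup file and an arbitrary 03:00 schedule (note: no -it flags, since cron has no TTY):
# /etc/cron.d/sentry-cleanup — runs nightly at 03:00
0 3 * * * root docker exec sentry-self-hosted-worker-1 sentry cleanup --days 7 >/dev/null 2>&1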
Sometimes the Kafka volume can grow terribly; in that case you can recreate it:
cd /opt/sentry/install/self-hosted-23.7.1/
docker compose down --volumes
docker volume rm sentry-kafka
docker volume rm sentry-zookeeper
./install.sh
docker compose up -d
«firstBit.License Server 2» on Ubuntu Linux
Posted on November 12, 2024
Create /etc/systemd/system/firstBit.service:
[Unit]
Description=firstBit.LicenseServer
After=syslog.target
After=network.target

[Service]
Type=simple
Restart=on-failure
PIDFile=/run/firstBit.pid
KillMode=control-group
ExecStart=/opt/firstBit.LicenseServer/linux/licenseserver --run --allow-ui-from-ip=*
ExecStop=/bin/sh -c '[ -n "$1" ] && kill -s TERM "$1"' -- "$MAINPID"
RestartSec=10s
User=root
Group=root
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target
mkdir /opt/firstBit.LicenseServer/
cd /opt/firstBit.LicenseServer/
wget https://static.1cbit.online/updates/license-server/download/LicenseServer-v2.7z
7za x LicenseServer-v2.7z
chmod +x /opt/firstBit.LicenseServer/linux/licenseserver
systemctl daemon-reload
systemctl start firstBit.service
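To have the license server start automatically on boot, it can also be enabled and checked (standard systemd commands, not in the original post):
systemctl enable firstBit.service
systemctl status firstBit.service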
HPE Smart Array E208i-a SR Gen10 firmware update
Posted on October 24, 2024
No drives were found in this system. You may need to reboot
You can obtain an rpm package here: https://downloads.linux.hpe.com/SDR/repo/spp-gen10/redhat/8/x86_64/current/
If you use a RedHat-based distro, just install the rpm; otherwise extract it:
rpm2cpio firmware-smartarray-f7c07bdbbd-4.11-1.1.x86_64.rpm | cpio -idmv
and run the update process:
./usr/lib/x86_64-linux-gnu/firmware-smartarray-f7c07bdbbd-4.11-1.1/setup

Supplemental Update / Online ROM Flash Component for Linux (x64) - HPE Smart Array P408i-p, P408e-p, P408i-a, E208i-p, E208e-p, E208i-a, P816i-a SR Gen10 (4.11), searching...
1) HPE Smart Array E208i-a SR Gen10 in Slot 0 (3.53)
Select which devices to flash [#,#-#,(A)ll,(N)one]> A
Flashing HPE Smart Array E208i-a SR Gen10 in Slot 0 [ 3.53 -> 4.11 ]
Deferred flashes will be performed on next system reboot

============ Summary ============
Smart Component Finished
Summary Messages
================
Reboot needed to activate 1 new FW image
Exit Status: 1
Deferred flashes will be performed on next system reboot
A reboot is required to complete update.
Update the controller firmware to 7.11 for correct operation with iLO.
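After the reboot, the controller firmware version can be verified, assuming HPE's ssacli utility is installed:
ssacli ctrl all show detail | grep -i firmware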
GrayLog: Hostname datanode not verified
Posted on October 23, 2024
After updating Graylog we get errors like:
Unable to retrieve version from Elasticsearch node: Hostname datanode not verified
Host name 'datanode' does not match the certificate subject provided
This can be fixed by disabling SSL/TLS between containers.
Put this line into .env:
GRAYLOG_DATANODE_INSECURE_STARTUP=true
then restart the containers:
docker compose stop
docker compose up -d
Proxmox QEMU: create an Ubuntu template
Posted on April 5, 2024
wget https://cloud-images.ubuntu.com/releases/jammy/release/ubuntu-22.04-server-cloudimg-amd64.img
mv ubuntu-22.04-server-cloudimg-amd64.img ubuntu-22.04-server-cloudimg-amd64.qcow2
qemu-img resize ubuntu-22.04-server-cloudimg-amd64.qcow2 10G
qm create 9000 \
  --name ubuntu22 \
  --bootdisk virtio0 \
  --ostype l26 \
  --sockets 1 \
  --cores 2 \
  --memory 1024 \
  --scsihw virtio-scsi-single \
  --onboot yes \
  --serial0 socket \
  --vga serial0 \
  --net0 virtio,bridge=vmbr0 \
  --agent 1 \
  --ide2 local-zfs:cloudinit \
  --virtio0 local-zfs:0,import-from=/root/ubuntu-22.04-server-cloudimg-amd64.qcow2
qm set 9000 --ipconfig0 ip=dhcp
Enable snippets storage on Proxmox:
pvesm set local --content images,rootdir,vztmpl,backup,iso,snippets
cat > /var/lib/vz/snippets/9000.yaml << EOF
#cloud-config
preserve_hostname: true
users:
  - default
  - name: shakirov
    gecos: Artur Shakirov
    shell: /bin/bash
    groups: sudo
    sudo: ALL=(ALL) NOPASSWD:ALL
    passwd: $PASSWORD_HASH_TAKE_IT_FROM_YOUR_/etc/shadow_
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5o74MtkmdOjSIvqvV+z0vtB65KE2EHLk8FGWqIqOxVg2nAvHNKS7Zy255c+mAWOS+sEJUsZMFlxaIsqS7f1nf/3TMftlnlRH3WNdoh2QP7lsEccpRrPymhD7+ZkouC0FosqciGEKGo0sGXnnyLnNajYp01UHmgsALH5vEsK9xXeiTtinvEDanI4QrI9U4bCoIEGboKeQPhvk7355x7hV05RBpq3fud/No+rbiD9PZxUQCI/l1H6GWtLbWE/LaGxS1CmBb1Rw3Ea5agJ5yX24F+Ey19CnKk8WsW649AI4HO4QdTKE7zwIEWW46ONIAEnpV0LkYmJbfBUCaKo/8g6I3
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPQ187Jo6t/Wxxgs73NnEWc+OGebbruOY/DfmxemFX2C shakirov@shakirov
write_files:
  - path: /etc/sudoers.d/cloud-init
    content: |
      Defaults !requiretty
package_update: true
package_upgrade: true
packages:
  - qemu-guest-agent
  - pwgen
  - nmap
  - htop
  - iftop
runcmd:
  - sed -i -e 's/^GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
  - sed -i -e 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  - [ systemctl, enable, qemu-guest-agent ]
  - [ systemctl, start, qemu-guest-agent ]
EOF
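The passwd field expects a crypt hash; besides copying it from /etc/shadow as the placeholder suggests, it can be generated with openssl, for example. Note that with an unquoted heredoc delimiter the shell expands $-variables, so paste the hash in afterwards or quote the delimiter as 'EOF':
openssl passwd -6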
Apply the cloud-init config to the VM:
qm set 9000 --cicustom "user=local:snippets/9000.yaml"
Now we have the cloud image imported and can prepare it. At this stage you can customize the image however you like:
qm start 9000 && qm terminal 9000
sudo -i
cat /dev/null > /etc/machine-id
cloud-init clean
history -c
shutdown -h now
Convert the VM to a template:
qm template 9000
And now we can create a VM from the template:
qm clone 9000 107 --full --name mgmt
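The clone inherits DHCP from the template; a static address can be set afterwards (the address and gateway below are hypothetical):
qm set 107 --ipconfig0 ip=192.168.1.107/24,gw=192.168.1.1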
Sentry: «All events» is empty
Posted on March 25, 2024
If you have an nginx reverse proxy in front of your self-hosted Sentry, you can get an issue with an empty «All events» page.
This can be fixed by adding to the nginx config:
proxy_buffer_size 128k;
proxy_buffers 16 256k;
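For context, a minimal sketch of where these directives might sit, assuming a typical proxy_pass setup (the server name and upstream port are assumptions, not taken from the original config):
server {
    listen 80;
    server_name sentry.example.com;        # hypothetical name
    location / {
        proxy_buffer_size 128k;            # enlarge buffers so Sentry's large responses fit
        proxy_buffers 16 256k;
        proxy_pass http://127.0.0.1:9000;  # assumed self-hosted Sentry port
    }
}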
If you need a complete nginx config, it is in the full post.
Proxmox migration fails: Host key verification failed.
Posted on March 22, 2024
If you get an error like this when migrating a VM from one Proxmox node to another:
# /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-n23' root@10.10.10.10 /bin/true
Host key verification failed.
ERROR: migration aborted (duration 00:00:01): Can't connect to destination address using public key
TASK ERROR: migration aborted
and you have FreeIPA installed on the Proxmox nodes, you can fix it temporarily by running:
ssh -o 'HostKeyAlias=pve-n23' root@10.10.10.10 /bin/true
or permanently, by commenting out the line
#GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts
in /etc/ssh/ssh_config.d/04-ipa.conf
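The same edit can be done with a sed one-liner (a sketch; -i.bak keeps a backup of the file):
sed -i.bak 's|^GlobalKnownHostsFile|#GlobalKnownHostsFile|' /etc/ssh/ssh_config.d/04-ipa.conf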
LVM resize HOWTO
Posted on March 15, 2024
We have a virtual machine with a 165 GB disk and have added an extra 10 GB. Now we need to extend the filesystem inside the VM.
Let's check that the extra 10 GB is visible in the VM:
# fdisk -l /dev/sdb
Disk /dev/sdb: 175.2 GiB, 187924742144 bytes, 367040512 sectors
Check the physical volume size; it's 165 GiB:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               data
  PV Size               <165.02 GiB / not usable 2.00 MiB
Let's resize it:
# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
and check again. OK, we see that it has been extended:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               data
  PV Size               <175.02 GiB / not usable 2.00 MiB
The volume group is extended too and now has 10 GiB free:
# vgdisplay
  --- Volume group ---
  VG Name               data
  VG Size               <175.02 GiB
  Alloc PE / Size       42244 / <165.02 GiB
  Free PE / Size        2560 / 10.00 GiB
Now we need to extend the logical volume:
# lvextend -l +100%FREE /dev/mapper/data-storage
  Size of logical volume data/storage changed from <165.02 GiB (42244 extents) to <175.02 GiB (44804 extents).
  Logical volume data/storage successfully resized.
and resize the filesystem (in my case it's XFS):
# xfs_growfs /dev/mapper/data-storage
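If the filesystem were ext4 instead, the equivalent step would be resize2fs (not part of the original setup):
resize2fs /dev/mapper/data-storage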
And now we see that the filesystem has grown:
# df -h /opt/docker/
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/data-storage  175G  147G   29G  84% /opt/docker
Sangoma Linux fails to boot after migration from VMware ESXi to Proxmox VE
Posted on March 13, 2024
After migrating Sangoma Linux (FreePBX on CentOS 7) from VMware to Proxmox with:
qm importovf 7804 PBX002.ovf local-zfs
the OS fails to boot with the error:
Could not boot
/dev/SangomaVG/root does not exist
/dev/SangomaVG/swaplv1 does not exist
/dev/mapper/SangomaVG-root does not exist
What do we need?
1. Add a network adapter, choose the «VMware vmxnet3» model and set the previously used MAC address (steps 1–2 can also be done from the CLI; see the sketch below).
2. Detach the hard disk and attach it as SATA.
3. Boot the VM from CentOS-7-minimal.iso in rescue mode and run:
mount --bind /run /mnt/sysimage/run
systemctl start multipathd.service
chroot /mnt/sysimage
dracut --regenerate-all --force
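For reference, steps 1 and 2 from the Proxmox CLI; a hedged sketch assuming VMID 7804, with the MAC address and disk volume name as placeholders:
qm set 7804 --net0 vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr0   # set model and old MAC
qm set 7804 --delete scsi0                                  # detach the migrated disk (adjust bus/index)
qm set 7804 --sata0 local-zfs:vm-7804-disk-0                # reattach it on the SATA bus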