vcmota Guru
Joined: 19 Jun 2017 Posts: 377
Posted: Wed May 04, 2022 12:02 pm Post subject: [SOLVED] Can't bring back my lxd virtual machine... |
Yesterday my Ubuntu virtual machine, which I run as an lxd guest inside my Gentoo machine, asked for an upgrade to the newer Ubuntu version. In the middle of the update the virtual machine froze, and I simply closed it "by force". The problem now is that I can't restart the machine. This is what I get when I try:
Code: |
~> lxc list
+------------+---------+------+------+-----------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+------+------+-----------------+-----------+
| ubuntu-lts | STOPPED | | | VIRTUAL-MACHINE | 0 |
+------------+---------+------+------+-----------------+-----------+
~> lxc start ubuntu-lts
Error: write /var/lib/lxd/virtual-machines/ubuntu-lts/config/server.crt: no space left on device
Try `lxc info --show-log ubuntu-lts` for more info
~> lxc info --show-log ubuntu-lts
Name: ubuntu-lts
Location: none
Remote: unix://
Architecture: x86_64
Created: 2022/01/18 22:22 -03
Status: Stopped
Type: virtual-machine
Profiles: default
Error: open /var/log/lxd/ubuntu-lts/qemu.log: no such file or directory
~>
|
Because of the first error I thought it could be a problem of disk space, so I tried to increase my pool volume with the following commands:
Code: |
~> history | grep storage | grep set
8585 [2022-05-04 07:57:29] lxc storage set mypool volume.size 70GB
8587 [2022-05-04 07:58:38] lxc storage set mypool size 70GB
8614 [2022-05-04 08:51:58] history | grep storage | grep set
~>
|
The commands have indeed increased my pool volume size:
Code: |
~> lxc storage list
+---------+-------------+--------+------------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default | | dir | /var/lib/lxd/storage-pools/default | 1 |
+---------+-------------+--------+------------------------------------+---------+
| mypool | | btrfs | /var/lib/lxd/disks/mypool.img | 1 |
+---------+-------------+--------+------------------------------------+---------+
~> lxc storage show mypool
config:
size: 70GB
source: /var/lib/lxd/disks/mypool.img
volume.size: 70GB
description: ""
name: mypool
driver: btrfs
used_by:
- /1.0/instances/ubuntu-lts
status: Created
locations:
- none
~>
|
However, the error message persists. Also, I notice that even after rebooting the host and restarting lxd, the increase in pool size is not reflected in the partitions that the lxc virtual machine uses. As you can see, they are still at around 50GB (their previous size), and are still full:
Code: |
~> df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 10M 0 10M 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 4,0G 1,5M 4,0G 1% /run
/dev/mapper/vg0-lvol1 405G 266G 119G 70% /
cgroup_root 10M 0 10M 0% /sys/fs/cgroup
/dev/mapper/vg1-lvol0 916G 433G 437G 50% /archive
tmpfs 12G 0 12G 0% /var/tmp/portage
tmpfs 4,0G 4,0K 4,0G 1% /tmp
tmpfs 4,0G 4,0K 4,0G 1% /run/lock
tmpfs 3,2G 8,0K 3,2G 1% /run/user/1000
tmpfs 100K 0 100K 0% /var/lib/lxd/shmounts
tmpfs 100K 0 100K 0% /var/lib/lxd/devlxd
/dev/loop0 47G 46G 0 100% /var/lib/lxd/storage-pools/mypool
~>
|
Considering the output above, perhaps mypool is only the mount point, and before increasing it I should have increased /dev/loop0, but I have no idea how to do that, because /dev/loop0 appears to be a real device. Also, if it is a real disk, I should at least be able to mount it elsewhere and extract my files...
Thank you all for your help.
Last edited by vcmota on Thu Apr 18, 2024 5:21 pm; edited 1 time in total |
alamahant Advocate
Joined: 23 Mar 2019 Posts: 3916
Posted: Wed May 04, 2022 2:32 pm Post subject: |
How did you create your btrfs pool for lxd?
I see
Code: |
| mypool | | btrfs | /var/lib/lxd/disks/mypool.img | 1 |
|
I don't get it.
In my case
Code: |
lxc storage list
+-------------+-------------+--------+----------------------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+-------------+-------------+--------+----------------------------------------------+---------+
| btrfs_pool0 | | btrfs | /dev/disk/by-id/wwn-0x500a0751e206a7b4-part9 | 19 |
+-------------+-------------+--------+----------------------------------------------+---------+
| default | | dir | /var/lib/lxd/storage-pools/default | 1 |
+-------------+-------------+--------+----------------------------------------------+---------+
|
You should have created a partition and assigned it to an lxd btrfs pool.
Something like
Code: |
lxc storage create mypool btrfs source=/dev/disk/by-uuid/<xxxxxxxxxxxxx> size=<size>
|
But I see a mypool.img file in your case.
I would first move the ubuntu guest to the default pool
Code: |
lxc move ubuntu ubuntu-default -s default
###then create an unformatted partition by gdisk or similar and....
lxc storage delete mypool
lxc storage create mypool btrfs source=/dev/disk/by-uuid/<xxxxxxxxxxxxxxx> size=<size>
lxc move ubuntu-default ubuntu -s mypool
|
You can get the UUID with
blkid /dev/<new-partition>
Hu Administrator
Joined: 06 Mar 2007 Posts: 22634
Posted: Wed May 04, 2022 3:04 pm Post subject: Re: Can't bring back my lxd virtual machine... |
vcmota wrote: | It occurs that in the middle of the update the virtual machine froze, |
If I recall correctly, this is normal and expected when the host storage for the guest's block device is exhausted. You might have been able to recover and unfreeze the machine by provisioning more storage, then directing the hypervisor to resume the guest. You might have been able to avoid the hang entirely by provisioning enough storage that the guest could fill its virtual disk without exhausting the host's storage for that virtual disk.
vcmota wrote: | and I simply closed it "by force". |
This is almost always a bad idea, and is especially bad in the middle of a system upgrade. Even once you get the guest to start, you may find it to be corrupted by the abrupt termination.
vcmota wrote: | Considering the output above, perhaps mypool is only the mount point, and before increasing it I should have increased /dev/loop0, but I have no idea how to do it. That is so because /dev/loop0 appears to be a real device. Also, if it is a real disk, I should at least be able to mount it elsewhere and extract my files... |
This is not lxc-specific. Loop devices are used to make filesystem files act as block devices, for dealing with cases where the consumer wants a block device instead. You can grow loop0 by detaching it from the underlying file, growing that file, then attaching it again. You could also grow the file and then force the loop system to discover the change. Either way, that will make the block device bigger, but then you will need to grow the filesystem on that block device. |
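Hu's grow-the-file-then-grow-the-filesystem sequence might look like the following as shell commands. This is an untested sketch assuming the paths shown earlier in the thread (/var/lib/lxd/disks/mypool.img backing /dev/loop0, mounted at /var/lib/lxd/storage-pools/mypool), an OpenRC host, and that LXD is stopped; the 20GB increment is an arbitrary example:

```shell
# Stop LXD so nothing writes to the pool during the resize.
rc-service lxd stop

# Grow the sparse backing file in place.
truncate -s +20G /var/lib/lxd/disks/mypool.img

# Tell the kernel that the loop device's backing file changed size,
# instead of detaching and re-attaching it.
losetup --set-capacity /dev/loop0

# Grow the btrfs filesystem to fill the enlarged block device
# (the pool must be mounted for this step).
btrfs filesystem resize max /var/lib/lxd/storage-pools/mypool
```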
vcmota Guru
Posted: Wed May 04, 2022 4:23 pm Post subject: |
Thank you alamahant for your reply.
alamahant wrote: | How did you create your btrfs pool for lxd?
I see
Code: |
| mypool | | btrfs | /var/lib/lxd/disks/mypool.img | 1 |
|
I don't get it.
In my case
Code: |
lxc storage list
+-------------+-------------+--------+----------------------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+-------------+-------------+--------+----------------------------------------------+---------+
| btrfs_pool0 | | btrfs | /dev/disk/by-id/wwn-0x500a0751e206a7b4-part9 | 19 |
+-------------+-------------+--------+----------------------------------------------+---------+
| default | | dir | /var/lib/lxd/storage-pools/default | 1 |
+-------------+-------------+--------+----------------------------------------------+---------+
|
You should have created a partition and assigned it to an lxd btrfs pool.
Something like
Code: |
lxc storage create mypool btrfs source=/dev/disk/by-uuid/<xxxxxxxxxxxxx> size=<size>
|
|
I guess you are correct; this is how I created mine:
Code: |
~> history | grep lxc | grep create
4559 [2022-01-18 22:17:54] lxc storage create mypool btrfs size=50GB
8512 [2022-05-04 13:19:15] history | grep lxc | grep create
~>
|
alamahant wrote: |
But I see a mypool.img file in your case.
I would first move the ubuntu guest to the default pool
Code: |
lxc move ubuntu ubuntu-default -s default
###then create an unformatted partition by gdisk or similar and....
lxc storage delete mypool
lxc storage create mypool btrfs source=/dev/disk/by-uuid/<xxxxxxxxxxxxxxx> size=<size>
lxc move ubuntu-default ubuntu -s mypool
|
You can get the UUID with
blkid /dev/<new-partition> |
Ok, I can try that. But is there any way of recovering the data I had on the virtual machine prior to the execution of that procedure? |
vcmota Guru
Posted: Wed May 04, 2022 4:34 pm Post subject: Re: Can't bring back my lxd virtual machine... |
Thank you Hu for your reply.
Hu wrote: | vcmota wrote: | and I simply closed it "by force". | This is almost always a bad idea, and is especially bad in the middle of a system upgrade. Even once you get the guest to start, you may find it to be corrupted by the abrupt termination. |
Yeah, I felt very dumb immediately after realizing what had just happened...
Hu wrote: | vcmota wrote: | Considering the output above, perhaps mypool is only the mount point, and before increasing it I should have increased /dev/loop0, but I have no idea how to do it. That is so because /dev/loop0 appears to be a real device. Also, if it is a real disk, I should at least be able to mount it elsewhere and extract my files... | This is not lxc-specific. Loop devices are used to make filesystem files act as block devices, for dealing with cases where the consumer wants a block device instead. You can grow loop0 by detaching it from the underlying file, growing that file, then attaching it again. You could also grow the file and then force the loop system to discover the change. Either way, that will make the block device bigger, but then you will need to grow the filesystem on that block device. |
This feels like something I may be capable of doing myself, but is there any reading you suggest? I have been formatting partitions forever, but your reply made me believe that this case is much more delicate than what I am used to. Also, is there any way of recovering my user data first? |
alamahant Advocate
Posted: Wed May 04, 2022 6:04 pm Post subject: |
Quote: |
Ok, I can try that. But is there any way of recovering the data I had on the virtual machine prior to the execution of that procedure?
|
Yes: first you move it to the default pool, and after the new pool is created you bring it back to mypool.
Code: |
lxc move ubuntu-lts ubuntu-lts-moved -s default
lxc move ubuntu-lts-moved ubuntu-lts -s mypool
|
Please see my previous post.
Hu Administrator
Posted: Thu May 05, 2022 1:39 am Post subject: |
I have no specific documentation to suggest. Any user data that was in memory in the guest was lost when you terminated it. We do not yet know if or how seriously its on-disk data was damaged. Referring to a backup would be the safest answer. |
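As a non-destructive first step, the pool image from earlier in the thread could be loop-mounted read-only on the host to see what is still recoverable. A sketch, assuming the path shown earlier; the directory layout inside the pool is a guess and may differ:

```shell
# Mount the pool image read-only so nothing in it is modified.
mkdir -p /mnt/mypool-rescue
mount -o ro,loop /var/lib/lxd/disks/mypool.img /mnt/mypool-rescue

# Look for the guest's data; the exact subvolume layout is an assumption.
ls /mnt/mypool-rescue

# Detach cleanly when done.
umount /mnt/mypool-rescue
```

Note that for a virtual machine the guest's root disk inside the pool may itself be an image file that needs a second loop mount before its contents are visible.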
Juippisi Developer
Joined: 30 Sep 2005 Posts: 750 Location: /home
Posted: Thu May 05, 2022 5:48 am Post subject: Re: Can't bring back my lxd virtual machine... |
vcmota wrote: |
Because of the first error I thought it could be a problem of disk space, so I tried to increase my pool volume with the following commands:
Code: |
~> history | grep storage | grep set
8585 [2022-05-04 07:57:29] lxc storage set mypool volume.size 70GB
8587 [2022-05-04 07:58:38] lxc storage set mypool size 70GB
8614 [2022-05-04 08:51:58] history | grep storage | grep set
~>
|
The commands have indeed increased my pool volume size:
|
AFAIK these commands only have an effect when you're creating the image. Once it's done, you'll have to manually update the disk size using qemu-img resize. See instructions here:
https://wiki.gentoo.org/wiki/LXD#Disk_size
Note that it could also be btrfs-related; I have no experience with that. If worst comes to worst, you may have to copy your image to a new, bigger one (with dd, I guess?) |
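The linked wiki approach amounts to resizing the VM's root disk image with qemu-img while the guest is stopped. A sketch, assuming a raw image at the path below (the filename and location inside the pool are assumptions, not confirmed from this thread):

```shell
# Inspect the current virtual disk (path is illustrative).
qemu-img info /var/lib/lxd/storage-pools/mypool/virtual-machines/ubuntu-lts/root.img

# Grow the image by 20GB; for raw images this just extends the file.
qemu-img resize -f raw /var/lib/lxd/storage-pools/mypool/virtual-machines/ubuntu-lts/root.img +20G
```

The guest's partition table and filesystem would still have to be grown from inside the guest afterwards (e.g. with growpart and resize2fs, assuming an ext4 root).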
alamahant Advocate
Posted: Thu May 05, 2022 4:47 pm Post subject: |
If you decide to go the "qemu-img" way, please find it in
Code: |
app-emulation/libguestfs
|
package.
vcmota Guru
Posted: Mon May 16, 2022 2:56 pm Post subject: |
alamahant wrote: |
You should have created a partition and assigned it to an lxd btrfs pool.
Something like
Code: |
lxc storage create mypool btrfs source=/dev/disk/by-uuid/<xxxxxxxxxxxxx> size=<size>
|
You can get UUID by
blkid /dev/<new-partition> |
Thank you alamahant for your reply.
There is something here that I do not understand at all: how do I create this partition on a disk that is in use? My lxc image was created in my home directory, which is a partition in its own right on a physical disk that has no empty space; its full capacity is already 100% allocated either to / or swap. |
alamahant Advocate
Posted: Mon May 16, 2022 5:11 pm Post subject: |
Quote: |
which is a partition on its own right in a physical disk which has no empty space
|
Disk space is cheap nowadays.
Maybe consider adding some?
Or shrinking the / partition?
Or go the "qemu-img resize" way.
Hu Administrator
Posted: Mon May 16, 2022 6:14 pm Post subject: |
Changing the partition table of a disk which has mounted filesystems will usually require a reboot to reread the table.
I suspect that vcmota is describing this in a confusing way. I think the likely topology is: physical disk -> partition -> host btrfs(?). On that btrfs, there is a /var/lib/lxd/storage-pools/mypool, which is a file that has been bound to a loopback device. The content of that file is a btrfs filesystem, into which the guest's data was placed, possibly with a few more levels of indirection just to ensure confusion and complication. The problem is that the file needs to be bigger, and the btrfs within that file needs to be bigger. Juippisi previously suggested that a direct qemu-img use may be needed, because that will resize the host side storage of the guest's data. |
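The chain of indirection described above can be verified on the host with standard tools; a sketch using the paths from this thread:

```shell
# Which backing file sits behind each loop device?
losetup -l

# Sparse files report two sizes: logical (apparent) and actually allocated.
du -h --apparent-size /var/lib/lxd/disks/mypool.img
du -h /var/lib/lxd/disks/mypool.img

# Space accounting of the btrfs filesystem inside the loop device.
btrfs filesystem usage /var/lib/lxd/storage-pools/mypool
```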
vcmota Guru
Posted: Mon May 16, 2022 8:26 pm Post subject: |
Thank you alamahant for your reply.
alamahant wrote: |
Disk space is cheap nowadays.
Maybe consider adding some? |
I believe I can't: the main disk in my Dell has 500GB of space and apparently I can't alter the size of this main disk. I have another disk in a second slot, but it is already formatted as a backup. Also, I went through hell to successfully install Gentoo on this laptop, and removing the disk would set me back many days or weeks, which is something that right now I can't afford.
alamahant wrote: |
Or shrinking the / partition? |
That is something that scares me deeply. If I make any mistake I will have to reinstall everything, which, again, will cost me time that I can't afford to lose right now.
alamahant wrote: |
Or go the "qemu-img resize" way. |
I believe I screwed up that opportunity. Today I decided to follow your earlier suggestion (moving into the default pool, creating a new mypool, moving back to mypool). But since I lost myself in the partition issue, I ended up recreating mypool exactly the same way as I had created the first one:
Code: |
~> history | grep lxc | grep create
8382 [2022-05-04 13:19:11] history | grep lxc | create
8489 [2022-05-16 13:39:39] lxc storage create mypool btrfs size=70GB
~>
|
and then moved it back. Two things happened: 1) the new pool mypool now has its size properly recognized by the system (70GB), as shown in the output of df:
Code: |
~> df -h
/dev/loop0 66G 55G 9,2G 86% /var/lib/lxd/storage-pools/mypool
~>
|
but 2) the machine just can't be started:
Code: |
~> lxc start ubuntu-lts
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name ubuntu-lts -uuid 5de2f6da-f44b-4648-a1f3-266cf3306b59 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/ubuntu-lts/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/log/lxd/ubuntu-lts/qemu.spice -pidfile /var/log/lxd/ubuntu-lts/qemu.pid -D /var/log/lxd/ubuntu-lts/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: : Process exited with non-zero value -1
Try `lxc info --show-log ubuntu-lts` for more info
~> lxc info --show-log ubuntu-lts
Name: ubuntu-lts
Location: none
Remote: unix://
Architecture: x86_64
Created: 2022/05/16 13:41 -03
Status: Stopped
Type: virtual-machine
Profiles: default
Log:
~>
|
So at some point in the process I probably ruined my lxd/lxc install. I say that because afterwards I created other pools, tried to download new virtual machines into them, and none of them starts. It has been a bad day... |
alamahant Advocate
Posted: Mon May 16, 2022 9:01 pm Post subject: |
Nothing to worry too much about.
Apparently you lost the ubuntu container.
Just unmerge lxd, remove /var/lib/lxd/*, re-emerge lxd, and rerun "lxd init".
You will be up and running very quickly.
But it's up to you how much you want to salvage your current environment and possibly your VM.
Please see
https://wiki.gentoo.org/wiki/LXD#Running_systemd_based_containers_on_OpenRC_hosts
But to quote Neddy
Quote: |
Debug it. Don't start over. That's not the Gentoo way.
|
So first maybe try your utmost to debug your situation.
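For reference, the unmerge/re-emerge route mentioned above might look like this on an OpenRC host; the app-containers/lxd package name is an assumption (older trees used app-emulation/lxd), and the rm step destroys every pool, image, and guest:

```shell
rc-service lxd stop
emerge --unmerge app-containers/lxd   # package name may differ by tree vintage
rm -rf /var/lib/lxd/*                 # WARNING: destroys all pools, images and guests
emerge app-containers/lxd
rc-service lxd start
lxd init
```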
vcmota Guru
Posted: Tue May 17, 2022 2:14 am Post subject: |
I am trying, but the debug option shows me nothing that I can work with. In fact, apart from what appear to be initialization lines, it gives me the same error message as when I try to start the virtual machine:
Code: |
~> lxc start ubuntu-lts --debug
DBUG[05-16|23:09:53] Connecting to a local LXD over a Unix socket
DBUG[05-16|23:09:53] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
DBUG[05-16|23:09:53] Got response struct from LXD
DBUG[05-16|23:09:53]
{
"config": {},
"api_extensions": [
"storage_zfs_remove_snapshots",
"container_host_shutdown_timeout",
"container_stop_priority",
"container_syscall_filtering",
"auth_pki",
"container_last_used_at",
"etag",
"patch",
"usb_devices",
"https_allowed_credentials",
"image_compression_algorithm",
"directory_manipulation",
"container_cpu_time",
"storage_zfs_use_refquota",
"storage_lvm_mount_options",
"network",
"profile_usedby",
"container_push",
"container_exec_recording",
"certificate_update",
"container_exec_signal_handling",
"gpu_devices",
"container_image_properties",
"migration_progress",
"id_map",
"network_firewall_filtering",
"network_routes",
"storage",
"file_delete",
"file_append",
"network_dhcp_expiry",
"storage_lvm_vg_rename",
"storage_lvm_thinpool_rename",
"network_vlan",
"image_create_aliases",
"container_stateless_copy",
"container_only_migration",
"storage_zfs_clone_copy",
"unix_device_rename",
"storage_lvm_use_thinpool",
"storage_rsync_bwlimit",
"network_vxlan_interface",
"storage_btrfs_mount_options",
"entity_description",
"image_force_refresh",
"storage_lvm_lv_resizing",
"id_map_base",
"file_symlinks",
"container_push_target",
"network_vlan_physical",
"storage_images_delete",
"container_edit_metadata",
"container_snapshot_stateful_migration",
"storage_driver_ceph",
"storage_ceph_user_name",
"resource_limits",
"storage_volatile_initial_source",
"storage_ceph_force_osd_reuse",
"storage_block_filesystem_btrfs",
"resources",
"kernel_limits",
"storage_api_volume_rename",
"macaroon_authentication",
"network_sriov",
"console",
"restrict_devlxd",
"migration_pre_copy",
"infiniband",
"maas_network",
"devlxd_events",
"proxy",
"network_dhcp_gateway",
"file_get_symlink",
"network_leases",
"unix_device_hotplug",
"storage_api_local_volume_handling",
"operation_description",
"clustering",
"event_lifecycle",
"storage_api_remote_volume_handling",
"nvidia_runtime",
"container_mount_propagation",
"container_backup",
"devlxd_images",
"container_local_cross_pool_handling",
"proxy_unix",
"proxy_udp",
"clustering_join",
"proxy_tcp_udp_multi_port_handling",
"network_state",
"proxy_unix_dac_properties",
"container_protection_delete",
"unix_priv_drop",
"pprof_http",
"proxy_haproxy_protocol",
"network_hwaddr",
"proxy_nat",
"network_nat_order",
"container_full",
"candid_authentication",
"backup_compression",
"candid_config",
"nvidia_runtime_config",
"storage_api_volume_snapshots",
"storage_unmapped",
"projects",
"candid_config_key",
"network_vxlan_ttl",
"container_incremental_copy",
"usb_optional_vendorid",
"snapshot_scheduling",
"snapshot_schedule_aliases",
"container_copy_project",
"clustering_server_address",
"clustering_image_replication",
"container_protection_shift",
"snapshot_expiry",
"container_backup_override_pool",
"snapshot_expiry_creation",
"network_leases_location",
"resources_cpu_socket",
"resources_gpu",
"resources_numa",
"kernel_features",
"id_map_current",
"event_location",
"storage_api_remote_volume_snapshots",
"network_nat_address",
"container_nic_routes",
"rbac",
"cluster_internal_copy",
"seccomp_notify",
"lxc_features",
"container_nic_ipvlan",
"network_vlan_sriov",
"storage_cephfs",
"container_nic_ipfilter",
"resources_v2",
"container_exec_user_group_cwd",
"container_syscall_intercept",
"container_disk_shift",
"storage_shifted",
"resources_infiniband",
"daemon_storage",
"instances",
"image_types",
"resources_disk_sata",
"clustering_roles",
"images_expiry",
"resources_network_firmware",
"backup_compression_algorithm",
"ceph_data_pool_name",
"container_syscall_intercept_mount",
"compression_squashfs",
"container_raw_mount",
"container_nic_routed",
"container_syscall_intercept_mount_fuse",
"container_disk_ceph",
"virtual-machines",
"image_profiles",
"clustering_architecture",
"resources_disk_id",
"storage_lvm_stripes",
"vm_boot_priority",
"unix_hotplug_devices",
"api_filtering",
"instance_nic_network",
"clustering_sizing",
"firewall_driver",
"projects_limits",
"container_syscall_intercept_hugetlbfs",
"limits_hugepages",
"container_nic_routed_gateway",
"projects_restrictions",
"custom_volume_snapshot_expiry",
"volume_snapshot_scheduling",
"trust_ca_certificates",
"snapshot_disk_usage",
"clustering_edit_roles",
"container_nic_routed_host_address",
"container_nic_ipvlan_gateway",
"resources_usb_pci",
"resources_cpu_threads_numa",
"resources_cpu_core_die",
"api_os",
"resources_system",
"usedby_consistency",
"resources_gpu_mdev",
"console_vga_type",
"projects_limits_disk",
"storage_rsync_compression",
"gpu_mdev",
"resources_pci_iommu",
"resources_network_usb",
"resources_disk_address",
"network_state_vlan",
"gpu_sriov",
"migration_stateful",
"disk_state_quota",
"storage_ceph_features",
"gpu_mig",
"clustering_join_token",
"clustering_description",
"server_trusted_proxy",
"clustering_update_cert",
"storage_api_project",
"server_instance_driver_operational",
"server_supported_storage_drivers",
"event_lifecycle_requestor_address",
"resources_gpu_usb",
"network_counters_errors_dropped",
"image_source_project",
"database_leader",
"instance_all_projects",
"ceph_rbd_du",
"qemu_metrics",
"gpu_mig_uuid",
"event_project",
"instance_allow_inconsistent_copy",
"image_restrictions"
],
"api_status": "stable",
"api_version": "1.0",
"auth": "trusted",
"public": false,
"auth_methods": [
"tls"
],
"environment": {
"addresses": [],
"architectures": [
"x86_64",
"i686"
],
"certificate": "-----BEGIN CERTIFICATE-----\nMIICAzCCAYmgAwIBAgIQDkqGIKB58Br2/zENY1mlSTAKBggqhkjOPQQDAzA0MRww\nGgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMRQwEgYDVQQDDAtyb290QG1vcmFl\nczAeFw0yMjAxMTIxNjI1NTBaFw0zMjAxMTAxNjI1NTBaMDQxHDAaBgNVBAoTE2xp\nbnV4Y29udGFpbmVycy5vcmcxFDASBgNVBAMMC3Jvb3RAbW9yYWVzMHYwEAYHKoZI\nzj0CAQYFK4EEACIDYgAEWIIsKJG32AGxRe2OKmuT4L1g/+4x88q3ShKoMfyl1Wel\nlosUPyj9KCHci/NPakGyhAduWuiZ4xR8UVwgMgt1do+o7+c/PbbHaXKbJBz//PY5\n8Fx9ENtOF62D2z4Jg925o2AwXjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYI\nKwYBBQUHAwEwDAYDVR0TAQH/BAIwADApBgNVHREEIjAgggZtb3JhZXOHBH8AAAGH\nEAAAAAAAAAAAAAAAAAAAAAEwCgYIKoZIzj0EAwMDaAAwZQIxAO+7KbE0JIqJ8jcp\nZklV9KbghRvjpBKowEDLWwP4V9kmva5Rqe8FU54QC5fBimeynwIwW8hMUgsckfv5\nTX7rh/UVEQqZ2uRJeZtyVpXuhBbJU7bn5kunh+RFYbaDSqBnNzyI\n-----END CERTIFICATE-----\n",
"certificate_fingerprint": "573738ce2caaa0ec8429f1f9d9bff31746fd1fba8222ad2f412063ed9069d814",
"driver": "lxc | qemu",
"driver_version": "4.0.12 | 7.0.0",
"firewall": "xtables",
"kernel": "Linux",
"kernel_architecture": "x86_64",
"kernel_features": {
"netnsid_getifaddrs": "true",
"seccomp_listener": "true",
"seccomp_listener_continue": "true",
"shiftfs": "false",
"uevent_injection": "true",
"unpriv_fscaps": "true"
},
"kernel_version": "5.15.32-gentoo-r1-gentoo-dist",
"lxc_features": {
"cgroup2": "true",
"core_scheduling": "true",
"devpts_fd": "true",
"idmapped_mounts_v2": "true",
"mount_injection_file": "true",
"network_gateway_device_route": "true",
"network_ipvlan": "true",
"network_l2proxy": "true",
"network_phys_macvlan_mtu": "true",
"network_veth_router": "true",
"pidfd": "true",
"seccomp_allow_deny_syntax": "true",
"seccomp_notify": "true",
"seccomp_proxy_send_notify_fd": "true"
},
"os_name": "Gentoo",
"os_version": "",
"project": "default",
"server": "lxd",
"server_clustered": false,
"server_name": "moraes",
"server_pid": 8653,
"server_version": "4.0.9",
"storage": "dir | btrfs",
"storage_version": "1 | 5.15.1",
"storage_supported_drivers": [
{
"Name": "dir",
"Version": "1",
"Remote": false
},
{
"Name": "lvm",
"Version": "2.02.188(2) (2021-05-07) / 1.02.172 (2021-05-07) / 4.45.0",
"Remote": false
},
{
"Name": "zfs",
"Version": "2.1.4-r1-gentoo",
"Remote": false
},
{
"Name": "btrfs",
"Version": "5.15.1",
"Remote": false
}
]
}
}
DBUG[05-16|23:09:53] Sending request to LXD method=GET url=http://unix.socket/1.0/instances/ubuntu-lts etag=
DBUG[05-16|23:09:53] Got response struct from LXD
DBUG[05-16|23:09:53]
{
"architecture": "x86_64",
"config": {
"image.architecture": "amd64",
"image.description": "Ubuntu hirsute amd64 (20220117_00:58)",
"image.os": "Ubuntu",
"image.release": "hirsute",
"image.serial": "20220117_00:58",
"image.type": "disk-kvm.img",
"image.variant": "desktop",
"limits.cpu": "4",
"limits.memory": "12GB",
"security.secureboot": "false",
"volatile.base_image": "9f75ccbf4fdf4799fa521572a56f33d2b4dd92efcdf5513de61339cb97c8268f",
"volatile.eth0.hwaddr": "00:16:3e:8a:9e:79",
"volatile.uuid": "5de2f6da-f44b-4648-a1f3-266cf3306b59",
"volatile.vsock_id": "16"
},
"devices": {
"root": {
"path": "/",
"pool": "mypool",
"size": "50GiB",
"type": "disk"
}
},
"ephemeral": false,
"profiles": [
"default"
],
"stateful": false,
"description": "",
"created_at": "2022-05-16T16:41:03.316302034Z",
"expanded_config": {
"image.architecture": "amd64",
"image.description": "Ubuntu hirsute amd64 (20220117_00:58)",
"image.os": "Ubuntu",
"image.release": "hirsute",
"image.serial": "20220117_00:58",
"image.type": "disk-kvm.img",
"image.variant": "desktop",
"limits.cpu": "4",
"limits.memory": "12GB",
"security.secureboot": "false",
"volatile.base_image": "9f75ccbf4fdf4799fa521572a56f33d2b4dd92efcdf5513de61339cb97c8268f",
"volatile.eth0.hwaddr": "00:16:3e:8a:9e:79",
"volatile.uuid": "5de2f6da-f44b-4648-a1f3-266cf3306b59",
"volatile.vsock_id": "16"
},
"expanded_devices": {
"eth0": {
"name": "eth0",
"network": "lxdbr0",
"type": "nic"
},
"root": {
"path": "/",
"pool": "mypool",
"size": "50GiB",
"type": "disk"
}
},
"name": "ubuntu-lts",
"status": "Stopped",
"status_code": 102,
"last_used_at": "1970-01-01T00:00:00Z",
"location": "none",
"type": "virtual-machine",
"project": "default"
}
DBUG[05-16|23:09:53] Connected to the websocket: ws://unix.socket/1.0/events
DBUG[05-16|23:09:53] Sending request to LXD method=PUT url=http://unix.socket/1.0/instances/ubuntu-lts/state etag=
DBUG[05-16|23:09:53]
{
"action": "start",
"timeout": 0,
"force": false,
"stateful": false
}
DBUG[05-16|23:09:53] Got operation from LXD
DBUG[05-16|23:09:53]
{
"id": "36eed315-b27a-4df1-b6e4-1999e9505303",
"class": "task",
"description": "Starting instance",
"created_at": "2022-05-16T23:09:53.171567967-03:00",
"updated_at": "2022-05-16T23:09:53.171567967-03:00",
"status": "Running",
"status_code": 103,
"resources": {
"instances": [
"/1.0/instances/ubuntu-lts"
]
},
"metadata": null,
"may_cancel": false,
"err": "",
"location": "none"
}
DBUG[05-16|23:09:53] Sending request to LXD method=GET url=http://unix.socket/1.0/operations/36eed315-b27a-4df1-b6e4-1999e9505303 etag=
DBUG[05-16|23:09:53] Got response struct from LXD
DBUG[05-16|23:09:53]
{
"id": "36eed315-b27a-4df1-b6e4-1999e9505303",
"class": "task",
"description": "Starting instance",
"created_at": "2022-05-16T23:09:53.171567967-03:00",
"updated_at": "2022-05-16T23:09:53.171567967-03:00",
"status": "Running",
"status_code": 103,
"resources": {
"instances": [
"/1.0/instances/ubuntu-lts"
]
},
"metadata": null,
"may_cancel": false,
"err": "",
"location": "none"
}
Error: Failed to run: forklimits limit=memlock:unlimited:unlimited -- /usr/bin/qemu-system-x86_64 -S -name ubuntu-lts -uuid 5de2f6da-f44b-4648-a1f3-266cf3306b59 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=deny,resourcecontrol=deny -readconfig /var/log/lxd/ubuntu-lts/qemu.conf -spice unix=on,disable-ticketing=on,addr=/var/log/lxd/ubuntu-lts/qemu.spice -pidfile /var/log/lxd/ubuntu-lts/qemu.pid -D /var/log/lxd/ubuntu-lts/qemu.log -smbios type=2,manufacturer=Canonical Ltd.,product=LXD -runas nobody: : Process exited with non-zero value -1
Try `lxc info --show-log ubuntu-lts` for more info
~> lxc info --show-log ubuntu-lts
Name: ubuntu-lts
Location: none
Remote: unix://
Architecture: x86_64
Created: 2022/05/16 13:41 -03
Status: Stopped
Type: virtual-machine
Profiles: default
Log:
~>
|
vcmota Guru
Posted: Thu Apr 18, 2024 5:22 pm Post subject: |
This has just been solved in this post here, where Hu saved the day! The problem was in my bash_profile, which was just all wrong... |