started: (0/1/1) "[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]"
I0615 17:24:04.725729 668182 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Jun 15 17:24:04.757: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Jun 15 17:24:05.733: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 15 17:24:06.251: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 15 17:24:06.251: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jun 15 17:24:06.251: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 15 17:24:06.526: INFO: e2e test version: v1.18.3
Jun 15 17:24:06.684: INFO: kube-apiserver version: v1.18.3+a637491
Jun 15 17:24:06.855: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-storage] PersistentVolumes-local
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename persistent-local-volumes-test
Jun 15 17:24:07.410: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jun 15 17:24:09.989: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: block]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
STEP: Initializing test volumes
STEP: Creating block device on node "master-0-0" using path "/tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025"
Jun 15 17:24:13.393: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025 && dd if=/dev/zero of=/tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025/file bs=4096 count=5120 && losetup -f /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025/file] Namespace:e2e-persistent-local-volumes-test-1673 PodName:hostexec-master-0-0-d5bh8 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jun 15 17:24:14.668: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1673 PodName:hostexec-master-0-0-d5bh8 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Creating local PVCs and PVs
Jun 15 17:24:15.770: INFO: Creating a PV followed by a PVC
Jun 15 17:24:16.111: INFO: Waiting for PV local-pvdh49b to bind to PVC pvc-2wkx9
Jun 15 17:24:16.111: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-2wkx9] to have phase Bound
Jun 15 17:24:16.270: INFO: PersistentVolumeClaim pvc-2wkx9 found and phase=Bound (158.468278ms)
Jun 15 17:24:16.270: INFO: Waiting up to 3m0s for PersistentVolume local-pvdh49b to have phase Bound
Jun 15 17:24:16.431: INFO: PersistentVolume local-pvdh49b found and phase=Bound (161.396384ms)
[It] should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
STEP: Creating pod1 to write to the PV
STEP: Creating a pod
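For reference, the node-side setup that the ExecWithOptions calls above drive through the hostexec pod on master-0-0 can be collected into a minimal shell sketch; the path is copied from this run and has not been re-verified, and the loop device name is whatever losetup -f happens to pick:

  # Sketch of the node-side block-device setup the test performs (commands
  # copied from the ExecWithOptions records above; not re-run here).
  DIR=/tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025
  mkdir -p "${DIR}"
  dd if=/dev/zero of="${DIR}/file" bs=4096 count=5120     # ~20 MiB backing file
  losetup -f "${DIR}/file"                                # attach first free loop device
  losetup | grep "${DIR}/file" | awk '{ print $1 }'       # report which /dev/loopN was used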
[AfterEach] [Volume type: block]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
Jun 15 17:26:17.591: INFO: Deleting PersistentVolumeClaim "pvc-2wkx9"
Jun 15 17:26:17.798: INFO: Deleting PersistentVolume "local-pvdh49b"
Jun 15 17:26:17.959: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-persistent-local-volumes-test-1673 PodName:hostexec-master-0-0-d5bh8 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Tear down block device "/dev/loop2" on node "master-0-0" at path /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025/file
Jun 15 17:26:19.197: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop2] Namespace:e2e-persistent-local-volumes-test-1673 PodName:hostexec-master-0-0-d5bh8 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Removing the test directory /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025
Jun 15 17:26:20.525: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025] Namespace:e2e-persistent-local-volumes-test-1673 PodName:hostexec-master-0-0-d5bh8 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
[AfterEach] [sig-storage] PersistentVolumes-local
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "e2e-persistent-local-volumes-test-1673".
STEP: Found 10 events.
Jun 15 17:26:22.031: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-master-0-0-d5bh8: {default-scheduler } Scheduled: Successfully assigned e2e-persistent-local-volumes-test-1673/hostexec-master-0-0-d5bh8 to master-0-0
Jun 15 17:26:22.031: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {default-scheduler } Scheduled: Successfully assigned e2e-persistent-local-volumes-test-1673/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c to master-0-0
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:11 +0200 CEST - event for hostexec-master-0-0-d5bh8: {kubelet master-0-0} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12" already present on machine
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:11 +0200 CEST - event for hostexec-master-0-0-d5bh8: {kubelet master-0-0} Created: Created container agnhost
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:11 +0200 CEST - event for hostexec-master-0-0-d5bh8: {kubelet master-0-0} Started: Started container agnhost
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:17 +0200 CEST - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {kubelet master-0-0} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvdh49b" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvdh49b"
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:17 +0200 CEST - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {kubelet master-0-0} SuccessfulMountVolume: MapVolume.MapPodDevice succeeded for volume "local-pvdh49b" volumeMapPath "/var/lib/kubelet/pods/c4132657-8a9f-40b4-808c-6cca54f7e6ec/volumeDevices/kubernetes.io~local-volume"
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:19 +0200 CEST - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {multus } AddedInterface: Add eth0 [10.129.0.28/23]
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:27 +0200 CEST - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {kubelet master-0-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jun 15 17:26:22.031: INFO: At 2020-06-15 17:24:30 +0200 CEST - event for security-context-f802709b-9458-41c4-9af9-bc3a09c1605c: {kubelet master-0-0} Failed: Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown
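The last event above is the one that keeps write-pod from starting. As a hedged aside, these commands were not part of the captured run, but the stuck pod and its events could be inspected in the same way while the namespace still exists, using the names taken from this log:

  # Hypothetical follow-up (not executed in this run): inspect the Pending pod
  # and the namespace events before the e2e framework tears everything down.
  NS=e2e-persistent-local-volumes-test-1673
  POD=security-context-f802709b-9458-41c4-9af9-bc3a09c1605c
  oc -n "${NS}" get pod "${POD}" -o wide
  oc -n "${NS}" describe pod "${POD}"
  oc -n "${NS}" get events --sort-by=.lastTimestamp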
Jun 15 17:26:22.301: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 17:26:22.301: INFO: hostexec-master-0-0-d5bh8 master-0-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:10 +0200 CEST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:11 +0200 CEST } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:11 +0200 CEST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:10 +0200 CEST }]
Jun 15 17:26:22.302: INFO: security-context-f802709b-9458-41c4-9af9-bc3a09c1605c master-0-0 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:16 +0200 CEST } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:16 +0200 CEST ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:16 +0200 CEST ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-15 17:24:16 +0200 CEST }]
Jun 15 17:26:22.302: INFO:
Jun 15 17:26:22.504: INFO: hostexec-master-0-0-d5bh8[e2e-persistent-local-volumes-test-1673].container[agnhost].log Paused
Jun 15 17:26:22.667: INFO: unable to fetch logs for pods: security-context-f802709b-9458-41c4-9af9-bc3a09c1605c[e2e-persistent-local-volumes-test-1673].container[write-pod].error=the server rejected our request for an unknown reason (get pods security-context-f802709b-9458-41c4-9af9-bc3a09c1605c)
Jun 15 17:26:23.223: INFO: skipping dumping cluster info - cluster too large
Jun 15 17:26:23.223: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-persistent-local-volumes-test-1673" for this suite.
Jun 15 17:26:23.804: INFO: Running AfterSuite actions on all nodes
Jun 15 17:26:23.804: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:713]: Unexpected error:
    <*errors.errorString | 0xc000450ba0>: {
        s: "pod \"security-context-f802709b-9458-41c4-9af9-bc3a09c1605c\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-f802709b-9458-41c4-9af9-bc3a09c1605c" is not Running: timed out waiting for the condition
occurred

failed: (2m19s) 2020-06-15T15:26:23 "[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]"

Timeline:

Jun 15 17:24:04.602 - 139s I test="[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]" running
Jun 15 15:24:11.003 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/ reason/Created
Jun 15 15:24:11.014 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/master-0-0 reason/Scheduled
Jun 15 15:24:11.741 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/master-0-0 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
Jun 15 15:24:11.879 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/master-0-0 container/agnhost reason/Created
Jun 15 15:24:11.910 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/master-0-0 container/agnhost reason/Started
Jun 15 15:24:12.011 I ns/e2e-persistent-local-volumes-test-1673 pod/hostexec-master-0-0-d5bh8 node/master-0-0 container/agnhost reason/Ready
Jun 15 15:24:14.717 I ns/default pod/recycler-for-nfs-pv10 node/ reason/Created
Jun 15 15:24:14.723 W persistentvolume/nfs-pv10 reason/RecyclerPod (combined from similar events): Recycler pod: MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/766d24e9-db28-4a6a-bc41-0c03fe3ad4e3/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/766d24e9-db28-4a6a-bc41-0c03fe3ad4e3/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r39f8cab366bf47fb9b2f876cd0aeb7d3.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n (29221 times)
Jun 15 15:24:14.728 I ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/Scheduled
Jun 15 15:24:15.139 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r74366ccbe29e467aad8c46e596c540f6.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:15.870 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r8350498862e34331b7a0eab10359c5f5.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:16.961 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/ reason/Created
Jun 15 15:24:16.967 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Scheduled
Jun 15 15:24:17.167 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-re3a34942411b475fbd43442d937adb26.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:17.300 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/SuccessfulMountVolume MapVolume.MapPodDevice succeeded for volume "local-pvdh49b" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvdh49b"
Jun 15 15:24:17.305 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/SuccessfulMountVolume MapVolume.MapPodDevice succeeded for volume "local-pvdh49b" volumeMapPath "/var/lib/kubelet/pods/c4132657-8a9f-40b4-808c-6cca54f7e6ec/volumeDevices/kubernetes.io~local-volume"
Jun 15 15:24:19.266 W ns/openshift-operator-lifecycle-manager pod/packageserver-84cd46d9b4-2slc2 node/master-0-1 reason/Unhealthy Liveness probe failed: Get https://10.130.0.11:5443/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (705 times)
Jun 15 15:24:19.312 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-rf6fdee26e70d4ac5ab14355838eb8bad.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:19.823 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c reason/AddedInterface Add eth0 [10.129.0.28/23]
Jun 15 15:24:23.496 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-rde3ac61751224a4e9934247a6a96a655.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:25.158 I ns/openshift-cluster-storage-operator lease/snapshot-controller-leader reason/LeaderElection csi-snapshot-controller-654675b99d-kds5h stopped leading
Jun 15 15:24:25.873 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-654675b99d-kds5h node/worker-0-0 container/snapshot-controller container exited with code 255 (Error):
Jun 15 15:24:26.028 W clusteroperator/csi-snapshot-controller changed Available to False: _AsExpected: Available: Waiting for Deployment to deploy csi-snapshot-controller pods
Jun 15 15:24:26.028 W clusteroperator/csi-snapshot-controller changed Progressing to True: _AsExpected: Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods
Jun 15 15:24:26.029 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("Progressing: Waiting for Deployment to deploy csi-snapshot-controller pods"),Available changed from True to False ("Available: Waiting for Deployment to deploy csi-snapshot-controller pods") (371 times)
Jun 15 15:24:27.468 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:24:30.671 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown
Jun 15 15:24:31.179 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:24:31.655 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r00b9d2cf10ac403fabce55a1d68c2b5d.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:34.274 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (2 times)
Jun 15 15:24:45.019 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:24:47.812 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r4721a072eeda4b73ab50791c9e5c23db.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:24:48.179 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (3 times)
Jun 15 15:24:59.346 I ns/openshift-config-operator deployment/openshift-config-operator reason/KubeCloudConfigController openshift-config-managed/kube-cloud-config ConfigMap was deleted as no longer required (3878 times)
Jun 15 15:25:02.986 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:25:06.215 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (4 times)
Jun 15 15:25:07.693 I ns/openshift-cluster-storage-operator lease/snapshot-controller-leader reason/LeaderElection csi-snapshot-controller-654675b99d-kds5h became leader
Jun 15 15:25:08.031 I ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-654675b99d-kds5h node/worker-0-0 container/snapshot-controller reason/Ready
Jun 15 15:25:08.031 W ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-654675b99d-kds5h node/worker-0-0 container/snapshot-controller reason/Restarted
Jun 15 15:25:08.073 W clusteroperator/csi-snapshot-controller changed Available to True
Jun 15 15:25:08.073 W clusteroperator/csi-snapshot-controller changed Progressing to False
Jun 15 15:25:08.265 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/OperatorStatusChanged Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False (""),Available changed from False to True ("") (371 times)
Jun 15 15:25:19.602 - 59s W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 pod has been pending longer than a minute
Jun 15 15:25:19.602 - 59s W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 pod has been pending longer than a minute
Jun 15 15:25:20.046 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/fae88201-7403-4838-a6f0-347b1b83e8ec/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-rd525371b78274272bae17f870ab81874.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 15:25:20.988 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:25:24.242 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (5 times)
Jun 15 15:25:38.990 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:25:42.263 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (6 times)
Jun 15 15:25:55.989 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:25:59.183 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (7 times)
Jun 15 15:25:59.362 I ns/openshift-config-operator deployment/openshift-config-operator reason/KubeCloudConfigController openshift-config-managed/kube-cloud-config ConfigMap was deleted as no longer required (3879 times)
Jun 15 15:26:11.985 I ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 container/write-pod reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 15:26:15.192 W ns/e2e-persistent-local-volumes-test-1673 pod/security-context-f802709b-9458-41c4-9af9-bc3a09c1605c node/master-0-0 reason/Failed Error: CreateContainer failed: Timeout reached after 3s waiting for device 0:0:0:0/block: unknown (8 times)
Jun 15 15:26:17.798 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount Unable to attach or mount volumes: unmounted volumes=[vol], unattached volumes=[vol default-token-w29vn]: timed out waiting for the condition
Jun 15 17:26:23.819 I test="[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]" failed

Failing tests:

[sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 [Suite:openshift/conformance/parallel] [Suite:k8s]

error: 1 fail, 0 pass, 0 skip (2m19s)
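Every retry above fails the same way, with the runtime timing out while waiting for a block device. A first manual check, assumed rather than captured in this run, would be to confirm on master-0-0 that the loop device backing local-pvdh49b is still attached and mapped where the kubelet expects it (device name and paths copied from the log above):

  # Hypothetical checks on master-0-0 (not part of this run).
  losetup -a | grep local-volume-test-80d0b3db-c980-413e-b44d-a9519948f025
  lsblk /dev/loop2
  ls -l /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pvdh49b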