"[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]" started: (0/1/1) "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]" I0615 16:56:09.001740 665468 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready Jun 15 16:56:09.042: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable Jun 15 16:56:11.846: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 15 16:56:12.445: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 15 16:56:12.445: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready. Jun 15 16:56:12.445: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 15 16:56:12.618: INFO: e2e test version: v1.18.3 Jun 15 16:56:12.856: INFO: kube-apiserver version: v1.18.3+a637491 Jun 15 16:56:13.026: INFO: Cluster IP family: ipv4 [BeforeEach] [Top Level] /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413 [BeforeEach] [Top Level] /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413 [BeforeEach] [Top Level] /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95 [BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename volume Jun 15 16:56:13.651: INFO: About to run a Kube e2e test, ensuring namespace is privileged Jun 15 16:56:16.411: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:152
Jun 15 16:56:17.083: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/local-volume
STEP: Creating block device on node "master-0-2" using path "/tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c"
Jun 15 16:56:19.632: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c && dd if=/dev/zero of=/tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file bs=4096 count=5120 && losetup -f /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file] Namespace:e2e-volume-6569 PodName:hostexec-master-0-2-qzgvs ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jun 15 16:56:20.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-volume-6569 PodName:hostexec-master-0-2-qzgvs ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jun 15 16:56:22.158: INFO: Creating resource for pre-provisioned PV
Jun 15 16:56:22.158: INFO: Creating PVC and PV
STEP: Creating a PVC followed by a PV
Jun 15 16:56:22.497: INFO: Waiting for PV local-4gwvn to bind to PVC pvc-mz6gm
Jun 15 16:56:22.497: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mz6gm] to have phase Bound
Jun 15 16:56:22.706: INFO: PersistentVolumeClaim pvc-mz6gm found but phase is Pending instead of Bound.
Jun 15 16:56:24.871: INFO: PersistentVolumeClaim pvc-mz6gm found but phase is Pending instead of Bound.
Jun 15 16:56:27.032: INFO: PersistentVolumeClaim pvc-mz6gm found but phase is Pending instead of Bound.
Jun 15 16:56:29.252: INFO: PersistentVolumeClaim pvc-mz6gm found but phase is Pending instead of Bound.
Jun 15 16:56:31.415: INFO: PersistentVolumeClaim pvc-mz6gm found and phase=Bound (8.917463121s)
Jun 15 16:56:31.415: INFO: Waiting up to 3m0s for PersistentVolume local-4gwvn to have phase Bound
Jun 15 16:56:31.616: INFO: PersistentVolume local-4gwvn found and phase=Bound (200.388998ms)
STEP: starting local-injector
STEP: Writing text file contents in the container.
Jun 15 16:56:36.555: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-injector --namespace=e2e-volume-6569 -- /bin/sh -c echo 'Hello from local from namespace e2e-volume-6569' > /opt/0/index.html'
Jun 15 16:56:41.673: INFO: stderr: ""
Jun 15 16:56:41.673: INFO: stdout: ""
STEP: Checking that text file contents are perfect.
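Note: the block-device setup logged in the two ExecWithOptions entries above amounts to roughly the following when run directly on the node via the hostexec pod (a sketch using the paths from this run; the resulting loop device name depends on the host):

    # Sketch of the setup the test performs on master-0-2 (not the exact framework code).
    DIR=/tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c   # path from this run
    mkdir -p "${DIR}"
    dd if=/dev/zero of="${DIR}/file" bs=4096 count=5120          # 20 MiB backing file
    losetup -f "${DIR}/file"                                      # attach it to the first free loop device
    losetup | grep "${DIR}/file" | awk '{ print $1 }'             # recover the device name, e.g. /dev/loop0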
Jun 15 16:56:41.673: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-injector --namespace=e2e-volume-6569 -- cat /opt/0/index.html'
Jun 15 16:56:43.587: INFO: stderr: ""
Jun 15 16:56:43.587: INFO: stdout: "Hello from local from namespace e2e-volume-6569\n"
Jun 15 16:56:43.587: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:e2e-volume-6569 PodName:local-injector ContainerName:local-injector Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 15 16:56:44.851: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:e2e-volume-6569 PodName:local-injector ContainerName:local-injector Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
STEP: Checking fsType is correct.
Jun 15 16:56:46.298: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-injector --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:56:48.345: INFO: stderr: ""
Jun 15 16:56:48.345: INFO: stdout: "/dev/loop0 /opt/0 ext4 rw,seclabel,relatime 0 0\n"
STEP: Deleting pod local-injector in namespace e2e-volume-6569
Jun 15 16:56:48.516: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:48.714: INFO: Pod local-injector still exists
Jun 15 16:56:50.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:50.932: INFO: Pod local-injector still exists
Jun 15 16:56:52.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:52.875: INFO: Pod local-injector still exists
Jun 15 16:56:54.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:54.896: INFO: Pod local-injector still exists
Jun 15 16:56:56.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:56.891: INFO: Pod local-injector still exists
Jun 15 16:56:58.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:56:58.881: INFO: Pod local-injector still exists
Jun 15 16:57:00.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:57:00.882: INFO: Pod local-injector still exists
Jun 15 16:57:02.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:57:02.879: INFO: Pod local-injector still exists
Jun 15 16:57:04.715: INFO: Waiting for pod local-injector to disappear
Jun 15 16:57:04.883: INFO: Pod local-injector no longer exists
STEP: starting local-client
STEP: Checking that text file contents are perfect.
Jun 15 16:57:17.493: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- cat /opt/0/index.html'
Jun 15 16:57:26.076: INFO: stderr: ""
Jun 15 16:57:26.076: INFO: stdout: "Hello from local from namespace e2e-volume-6569\n"
Jun 15 16:57:26.076: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /opt/0] Namespace:e2e-volume-6569 PodName:local-client ContainerName:local-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 15 16:57:27.319: INFO: ExecWithOptions {Command:[/bin/sh -c test -b /opt/0] Namespace:e2e-volume-6569 PodName:local-client ContainerName:local-client Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
STEP: Checking fsType is correct.
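Note: the fsType check that follows is just a grep of /proc/mounts inside the pod, so it can be reproduced by hand with the same kubectl invocation the test uses (server and kubeconfig values are the ones from this run):

    # Sketch: repeat the test's fsType probe manually against the client pod.
    kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 \
            --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig \
            exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts
    # Against local-injector the same probe returned "/dev/loop0 /opt/0 ext4 ...";
    # against local-client (below) it keeps returning "kataShared /opt/0 virtiofs ...".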
Jun 15 16:57:28.586: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:30.533: INFO: stderr: ""
Jun 15 16:57:30.533: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:32.534: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:37.847: INFO: stderr: ""
Jun 15 16:57:37.847: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:39.848: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:42.060: INFO: stderr: ""
Jun 15 16:57:42.060: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:44.060: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:45.974: INFO: stderr: ""
Jun 15 16:57:45.974: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:47.974: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:49.893: INFO: stderr: ""
Jun 15 16:57:49.893: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:51.893: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:53.614: INFO: stderr: ""
Jun 15 16:57:53.614: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:55.614: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:57:57.381: INFO: stderr: ""
Jun 15 16:57:57.381: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:57:59.381: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:01.960: INFO: stderr: ""
Jun 15 16:58:01.960: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:03.960: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:05.841: INFO: stderr: ""
Jun 15 16:58:05.841: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:07.841: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:09.756: INFO: stderr: ""
Jun 15 16:58:09.756: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:11.756: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:13.555: INFO: stderr: ""
Jun 15 16:58:13.555: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:15.556: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:17.520: INFO: stderr: ""
Jun 15 16:58:17.520: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:19.520: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:21.295: INFO: stderr: ""
Jun 15 16:58:21.295: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:23.295: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:25.099: INFO: stderr: ""
Jun 15 16:58:25.099: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
Jun 15 16:58:27.099: INFO: Running '/home/fidencio/.local/bin/kubectl --server=https://api.kata-fidencio-0.qe.lab.redhat.com:6443 --kubeconfig=/home/fidencio/openshift/kata/clusterconfigs/auth/kubeconfig exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts'
Jun 15 16:58:29.017: INFO: stderr: ""
Jun 15 16:58:29.017: INFO: stdout: "kataShared /opt/0 virtiofs rw,relatime 0 0\n"
STEP: Deleting pod local-client in namespace e2e-volume-6569
Jun 15 16:58:31.228: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:31.423: INFO: Pod local-client still exists
Jun 15 16:58:33.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:33.682: INFO: Pod local-client still exists
Jun 15 16:58:35.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:35.620: INFO: Pod local-client still exists
Jun 15 16:58:37.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:37.586: INFO: Pod local-client still exists
Jun 15 16:58:39.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:39.616: INFO: Pod local-client still exists
Jun 15 16:58:41.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:41.594: INFO: Pod local-client still exists
Jun 15 16:58:43.423: INFO: Waiting for pod local-client to disappear
Jun 15 16:58:43.718: INFO: Pod local-client no longer exists
STEP: cleaning the environment after local
STEP: Deleting pv and pvc
Jun 15 16:58:43.718: INFO: Deleting PersistentVolumeClaim "pvc-mz6gm"
Jun 15 16:58:43.961: INFO: Deleting PersistentVolume "local-4gwvn"
Jun 15 16:58:44.226: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(losetup | grep /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] Namespace:e2e-volume-6569 PodName:hostexec-master-0-2-qzgvs ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Tear down block device "/dev/loop0" on node "master-0-2" at path /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file
Jun 15 16:58:45.865: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d /dev/loop0] Namespace:e2e-volume-6569 PodName:hostexec-master-0-2-qzgvs ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Removing the test directory /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c
Jun 15 16:58:47.297: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c] Namespace:e2e-volume-6569 PodName:hostexec-master-0-2-qzgvs ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
STEP: Deleting pod hostexec-master-0-2-qzgvs in namespace e2e-volume-6569
Jun 15 16:58:49.046: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "e2e-volume-6569".
STEP: Found 18 events.
Jun 15 16:58:49.396: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-master-0-2-qzgvs: {default-scheduler } Scheduled: Successfully assigned e2e-volume-6569/hostexec-master-0-2-qzgvs to master-0-2
Jun 15 16:58:49.396: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for local-client: {default-scheduler } Scheduled: Successfully assigned e2e-volume-6569/local-client to master-0-2
Jun 15 16:58:49.396: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for local-injector: {default-scheduler } Scheduled: Successfully assigned e2e-volume-6569/local-injector to master-0-2
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:17 +0200 CEST - event for hostexec-master-0-2-qzgvs: {kubelet master-0-2} Pulled: Container image "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12" already present on machine
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:18 +0200 CEST - event for hostexec-master-0-2-qzgvs: {kubelet master-0-2} Created: Created container agnhost
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:18 +0200 CEST - event for hostexec-master-0-2-qzgvs: {kubelet master-0-2} Started: Started container agnhost
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:22 +0200 CEST - event for pvc-mz6gm: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "e2e-volume-6569" not found
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:33 +0200 CEST - event for local-injector: {multus } AddedInterface: Add eth0 [10.128.0.46/23]
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:34 +0200 CEST - event for local-injector: {kubelet master-0-2} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:34 +0200 CEST - event for local-injector: {kubelet master-0-2} Created: Created container local-injector
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:34 +0200 CEST - event for local-injector: {kubelet master-0-2} Started: Started container local-injector
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:56:48 +0200 CEST - event for local-injector: {kubelet master-0-2} Killing: Stopping container local-injector
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:57:07 +0200 CEST - event for local-client: {multus } AddedInterface: Add eth0 [10.128.0.46/23]
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:57:15 +0200 CEST - event for local-client: {kubelet master-0-2} Started: Started container local-client
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:57:15 +0200 CEST - event for local-client: {kubelet master-0-2} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:57:15 +0200 CEST - event for local-client: {kubelet master-0-2} Created: Created container local-client
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:58:31 +0200 CEST - event for local-client: {kubelet master-0-2} Killing: Stopping container local-client
Jun 15 16:58:49.396: INFO: At 2020-06-15 16:58:48 +0200 CEST - event for hostexec-master-0-2-qzgvs: {kubelet master-0-2} Killing: Stopping container agnhost
Jun 15 16:58:49.554: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jun 15 16:58:49.554: INFO:
Jun 15 16:58:50.878: INFO: skipping dumping cluster info - cluster too large
Jun 15 16:58:50.878: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-volume-6569" for this suite.
Jun 15 16:58:51.511: INFO: Running AfterSuite actions on all nodes
Jun 15 16:58:51.511: INFO: Running AfterSuite actions on node 1

fail [k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:468]: failed: getting the right fsType ext4
Unexpected error:
    <*errors.errorString | 0xc0012b9710>: {
        s: "Failed to find \"ext4\", last result: \"kataShared /opt/0 virtiofs rw,relatime 0 0\n\"",
    }
    Failed to find "ext4", last result: "kataShared /opt/0 virtiofs rw,relatime 0 0
    "
occurred

failed: (2m43s) 2020-06-15T14:58:51 "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]"

Timeline:

Jun 15 16:56:08.852 - 162s  I test="[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]" running
Jun 15 14:56:17.280 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/ reason/Created
Jun 15 14:56:17.280 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 reason/Scheduled
Jun 15 14:56:17.969 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Pulled image/us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
Jun 15 14:56:18.160 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Created
Jun 15 14:56:18.200 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Started
Jun 15 14:56:19.059 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Ready
Jun 15 14:56:22.341 W ns/e2e-volume-6569 persistentvolumeclaim/pvc-mz6gm reason/ProvisioningFailed storageclass.storage.k8s.io "e2e-volume-6569" not found
Jun 15 14:56:23.852 - 134s  W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 pod has been pending longer than a minute
Jun 15 14:56:32.111 I ns/e2e-volume-6569 pod/local-injector node/ reason/Created
Jun 15 14:56:32.208 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 reason/Scheduled
Jun 15 14:56:34.076 I ns/e2e-volume-6569 pod/local-injector reason/AddedInterface Add eth0 [10.128.0.46/23]
Jun 15 14:56:34.339 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 14:56:34.581 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector reason/Created
Jun 15 14:56:34.596 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector reason/Started
Jun 15 14:56:35.145 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector reason/Ready
Jun 15 14:56:38.859 W ns/openshift-operator-lifecycle-manager pod/packageserver-84cd46d9b4-2slc2 node/master-0-1 reason/Unhealthy Readiness probe failed: Get https://10.130.0.11:5443/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (427 times)
Jun 15 14:56:39.295 W ns/openshift-operator-lifecycle-manager pod/packageserver-84cd46d9b4-2slc2 node/master-0-1 reason/Unhealthy Liveness probe failed: Get https://10.130.0.11:5443/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (696 times)
Jun 15 14:56:48.523 I ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector reason/Killing
Jun 15 14:56:48.523 W ns/e2e-volume-6569 pod/local-injector node/master-0-2 reason/GracefulDelete in 1s
Jun 15 14:56:51.214 E ns/e2e-volume-6569 pod/local-injector node/master-0-2 container/local-injector container exited with code 137 (Error):
Jun 15 14:56:51.504 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount Unable to attach or mount volumes: unmounted volumes=[vol], unattached volumes=[vol default-token-w29vn]: timed out waiting for the condition (2 times)
Jun 15 14:56:56.194 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount (combined from similar events): MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/5bf6bb0f-67ab-4390-8412-c7073ad72ee9/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/5bf6bb0f-67ab-4390-8412-c7073ad72ee9/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r85be62a5e79b43cc90b18f52046c4dcc.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n (2 times)
Jun 15 14:56:59.362 I ns/openshift-config-operator deployment/openshift-config-operator reason/KubeCloudConfigController openshift-config-managed/kube-cloud-config ConfigMap was deleted as no longer required (3836 times)
Jun 15 14:57:03.146 W ns/e2e-volume-6569 pod/local-injector node/master-0-2 reason/Deleted
Jun 15 14:57:05.099 I ns/e2e-volume-6569 pod/local-client node/ reason/Created
Jun 15 14:57:05.256 I ns/e2e-volume-6569 pod/local-client node/master-0-2 reason/Scheduled
Jun 15 14:57:07.500 I ns/e2e-volume-6569 pod/local-client reason/AddedInterface Add eth0 [10.128.0.46/23]
Jun 15 14:57:15.123 I ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client reason/Pulled image/docker.io/library/busybox:1.29
Jun 15 14:57:15.409 I ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client reason/Created
Jun 15 14:57:15.415 I ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client reason/Started
Jun 15 14:57:16.300 I ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client reason/Ready
Jun 15 14:57:29.860 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "" to "OAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (rpc error: code = Unavailable desc = transport is closing)"
Jun 15 14:57:29.860 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded message changed from "" to "OperatorSyncDegraded: rpc error: code = Unavailable desc = transport is closing" (2 times)
Jun 15 14:57:31.217 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Degraded message changed from "OAuthClientSyncDegraded: oauth client for console does not exist and cannot be created (rpc error: code = Unavailable desc = transport is closing)" to ""
Jun 15 14:57:32.440 I ns/openshift-authentication-operator deployment/authentication-operator reason/OperatorStatusChanged Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: rpc error: code = Unavailable desc = transport is closing" to "" (2 times)
Jun 15 14:57:59.373 I ns/openshift-config-operator deployment/openshift-config-operator reason/KubeCloudConfigController openshift-config-managed/kube-cloud-config ConfigMap was deleted as no longer required (3837 times)
Jun 15 14:58:31.229 I ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client reason/Killing
Jun 15 14:58:31.231 W ns/e2e-volume-6569 pod/local-client node/master-0-2 reason/GracefulDelete in 1s
Jun 15 14:58:31.631 E ns/e2e-volume-6569 pod/local-client node/master-0-2 container/local-client container exited with code 137 (Error):
Jun 15 14:58:38.856 W ns/openshift-operator-lifecycle-manager pod/packageserver-84cd46d9b4-2slc2 node/master-0-1 reason/Unhealthy Readiness probe failed: Get https://10.130.0.11:5443/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (428 times)
Jun 15 14:58:39.313 W ns/openshift-operator-lifecycle-manager pod/packageserver-84cd46d9b4-2slc2 node/master-0-1 reason/Unhealthy Liveness probe failed: Get https://10.130.0.11:5443/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (697 times)
Jun 15 14:58:43.152 W ns/e2e-volume-6569 pod/local-client node/master-0-2 reason/Deleted
Jun 15 14:58:49.043 W ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 reason/GracefulDelete in 0s
Jun 15 14:58:49.239 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Killing
Jun 15 14:58:49.239 W ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 reason/Deleted
Jun 15 14:58:49.239 I ns/e2e-volume-6569 pod/hostexec-master-0-2-qzgvs node/master-0-2 container/agnhost reason/Killing
Jun 15 16:58:51.523 I test="[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]" failed

Failing tests:

[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data [Suite:openshift/conformance/parallel] [Suite:k8s]

error: 1 fail, 0 pass, 0 skip (2m43s)
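Note on the failure: the fsType check that fails at k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:468 is the grep /opt/0 /proc/mounts loop shown above, which expects the filesystem field to contain "ext4". In this run the local-injector pod saw the volume as "/dev/loop0 /opt/0 ext4 ...", but the local-client pod kept reporting "kataShared /opt/0 virtiofs ...", apparently because that pod runs under the Kata runtime and the volume is re-exported into the guest over virtiofs, so the expected fsType never appears and the check gives up after repeated retries. A quick way to compare the two views, using only commands that already appear in this log (pod, namespace, and path names are the ones from this run):

    # Host view, via the hostexec pod (nsenter into the node's mount namespace):
    # the backing file is attached to a loop device, which the injector pod saw mounted as ext4.
    losetup | grep /tmp/local-driver-d9c5003e-926b-48b7-93a6-a7dde1d0aa9c/file    # -> /dev/loop0

    # Guest view, inside the Kata pod: the same volume shows up as a virtiofs mount.
    kubectl exec local-client --namespace=e2e-volume-6569 -- grep /opt/0 /proc/mounts
    # -> kataShared /opt/0 virtiofs rw,relatime 0 0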