"[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]"

started: (0/1/1) "[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]"

I0615 16:31:24.792034 662980 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Jun 15 16:31:24.823: INFO: Waiting up to 30m0s for all (but 100) nodes to be schedulable
Jun 15 16:31:26.174: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 15 16:31:26.716: INFO: 0 / 0 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 15 16:31:26.716: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jun 15 16:31:26.716: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 15 16:31:26.949: INFO: e2e test version: v1.18.3
Jun 15 16:31:27.116: INFO: kube-apiserver version: v1.18.3+a637491
Jun 15 16:31:27.290: INFO: Cluster IP family: ipv4
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/framework.go:1413
[BeforeEach] [Top Level]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename volume
Jun 15 16:31:27.857: INFO: About to run a Kube e2e test, ensuring namespace is privileged
Jun 15 16:31:30.685: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:193
Jun 15 16:31:30.927: INFO: Could not find CSI Name for in-tree plugin kubernetes.io/glusterfs
STEP: creating gluster-server pod
STEP: locating the "gluster-server" server pod
Jun 15 16:31:36.458: INFO: gluster server pod IP address: 10.130.0.19
STEP: creating Gluster endpoints
Jun 15 16:31:36.659: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-zdhd
STEP: Creating a pod to test exec-volume-test
Jun 15 16:31:36.845: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-zdhd" in namespace "e2e-volume-6119" to be "Succeeded or Failed"
Jun 15 16:31:37.015: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Pending", Reason="", readiness=false. Elapsed: 169.800854ms
Jun 15 16:31:39.185: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340079285s
Jun 15 16:31:41.370: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52505877s
Jun 15 16:31:43.606: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.760381813s
Jun 15 16:31:45.800: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.954504944s
Jun 15 16:31:48.049: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Running", Reason="", readiness=true. Elapsed: 11.203295563s
Jun 15 16:31:50.301: INFO: Pod "exec-volume-test-inlinevolume-zdhd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.456089202s
STEP: Saw pod success
Jun 15 16:31:50.301: INFO: Pod "exec-volume-test-inlinevolume-zdhd" satisfied condition "Succeeded or Failed"
Jun 15 16:31:50.590: INFO: Trying to get logs from node worker-0-0 pod exec-volume-test-inlinevolume-zdhd container exec-container-inlinevolume-zdhd:
STEP: delete the pod
Jun 15 16:31:50.943: INFO: Waiting for pod exec-volume-test-inlinevolume-zdhd to disappear
Jun 15 16:31:51.200: INFO: Pod exec-volume-test-inlinevolume-zdhd no longer exists
Jun 15 16:31:51.200: INFO: Deleting Gluster endpoints "gluster-server"...
Jun 15 16:31:51.405: INFO: Deleting Gluster server pod "gluster-server"...
Jun 15 16:31:51.405: INFO: Deleting pod "gluster-server" in namespace "e2e-volume-6119"
Jun 15 16:31:51.580: INFO: Wait up to 5m0s for pod "gluster-server" to be fully deleted
Jun 15 16:32:00.008: INFO: In-tree plugin kubernetes.io/glusterfs is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /home/fidencio/src/upstream/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "e2e-volume-6119".
STEP: Found 11 events.
Jun 15 16:32:00.216: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for exec-volume-test-inlinevolume-zdhd: {default-scheduler } Scheduled: Successfully assigned e2e-volume-6119/exec-volume-test-inlinevolume-zdhd to worker-0-0
Jun 15 16:32:00.216: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for gluster-server: {default-scheduler } Scheduled: Successfully assigned e2e-volume-6119/gluster-server to master-0-1
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:32 +0200 CEST - event for gluster-server: {multus } AddedInterface: Add eth0 [10.130.0.19/23]
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:33 +0200 CEST - event for gluster-server: {kubelet master-0-1} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0" already present on machine
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:33 +0200 CEST - event for gluster-server: {kubelet master-0-1} Created: Created container gluster-server
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:33 +0200 CEST - event for gluster-server: {kubelet master-0-1} Started: Started container gluster-server
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:38 +0200 CEST - event for exec-volume-test-inlinevolume-zdhd: {multus } AddedInterface: Add eth0 [10.128.2.7/23]
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:46 +0200 CEST - event for exec-volume-test-inlinevolume-zdhd: {kubelet worker-0-0} Started: Started container exec-container-inlinevolume-zdhd
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:46 +0200 CEST - event for exec-volume-test-inlinevolume-zdhd: {kubelet worker-0-0} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:46 +0200 CEST - event for exec-volume-test-inlinevolume-zdhd: {kubelet worker-0-0} Created: Created container exec-container-inlinevolume-zdhd
Jun 15 16:32:00.216: INFO: At 2020-06-15 16:31:51 +0200 CEST - event for gluster-server: {kubelet master-0-1} Killing: Stopping container gluster-server
Jun 15 16:32:00.415: INFO: POD NODE PHASE GRACE CONDITIONS
Jun 15 16:32:00.415: INFO:
Jun 15 16:32:00.945: INFO: skipping dumping cluster info - cluster too large
Jun 15 16:32:00.945: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-volume-6119" for this suite.
Jun 15 16:32:01.516: INFO: Running AfterSuite actions on all nodes
Jun 15 16:32:01.516: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/framework/util.go:798]: Unexpected error:
    <*errors.errorString | 0xc0021f41c0>: {
        s: "expected \"test-inlinevolume-zdhd\" in container output: Expected\n    <string>: failed to get parse function: unsupported log format: \"index.html\\n\"\nto contain substring\n    <string>: test-inlinevolume-zdhd",
    }
    expected "test-inlinevolume-zdhd" in container output: Expected
        <string>: failed to get parse function: unsupported log format: "index.html\n"
    to contain substring
        <string>: test-inlinevolume-zdhd
occurred

failed: (36.9s) 2020-06-15T14:32:01 "[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]"

Timeline:

Jun 15 16:31:24.662 - 36s I test="[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]" running
Jun 15 14:31:31.135 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 reason/Scheduled
Jun 15 14:31:31.140 I ns/e2e-volume-6119 pod/gluster-server node/ reason/Created
Jun 15 14:31:31.739 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/FailedMount Unable to attach or mount volumes: unmounted volumes=[vol], unattached volumes=[vol default-token-w29vn]: timed out waiting for the condition (4 times)
Jun 15 14:31:32.180 I ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/DeadlineExceeded Pod was active on the node longer than the specified deadline
Jun 15 14:31:32.187 I persistentvolume/nfs-pv10 reason/RecyclerPod Recycler pod: Pod was active on the node longer than the specified deadline (2963 times)
Jun 15 14:31:32.188 E ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/Failed (DeadlineExceeded): Pod was active on the node longer than the specified deadline
Jun 15 14:31:32.190 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/GracefulDelete in 0s
Jun 15 14:31:32.337 W ns/default pod/recycler-for-nfs-pv10 node/worker-0-0 reason/Deleted
Jun 15 14:31:32.768 I ns/e2e-volume-6119 pod/gluster-server reason/AddedInterface Add eth0 [10.130.0.19/23]
Jun 15 14:31:33.177 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 container/gluster-server reason/Pulled image/gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0
Jun 15 14:31:33.281 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 container/gluster-server reason/Created
Jun 15 14:31:33.316 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 container/gluster-server reason/Started
Jun 15 14:31:34.109 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 container/gluster-server reason/Ready
Jun 15 14:31:36.835 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/ reason/Created
Jun 15 14:31:36.845 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 reason/Scheduled
Jun 15 14:31:38.963 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd reason/AddedInterface Add eth0 [10.128.2.7/23]
Jun 15 14:31:44.670 I ns/default pod/recycler-for-nfs-pv10 node/ reason/Created
Jun 15 14:31:44.671 W persistentvolume/nfs-pv10 reason/RecyclerPod (combined from similar events): Recycler pod: MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/7be48f32-fd4d-438f-a71a-3f35538ee6ae/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/7be48f32-fd4d-438f-a71a-3f35538ee6ae/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r7c84f2a7fb6342c98941e06ceb196456.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n (28327 times)
Jun 15 14:31:44.857 I ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/Scheduled
Jun 15 14:31:45.156 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-raa8630e0cc8a497b931b43967e68e6d9.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 14:31:45.772 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r37cfe06898bb4e5a8089f414cf2c4d7d.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 14:31:46.652 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 container/exec-container-inlinevolume-zdhd reason/Pulled image/docker.io/library/nginx:1.14-alpine
Jun 15 14:31:46.884 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-rb6e667608259449e80799989155d344c.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 14:31:47.024 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 container/exec-container-inlinevolume-zdhd reason/Created
Jun 15 14:31:47.025 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 container/exec-container-inlinevolume-zdhd reason/Started
Jun 15 14:31:47.237 I ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 container/exec-container-inlinevolume-zdhd reason/Ready
Jun 15 14:31:49.075 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-rc698fd3dfe594ee982249f2f8a4d5f97.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 14:31:50.966 W ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 reason/GracefulDelete in 0s
Jun 15 14:31:50.968 W ns/e2e-volume-6119 pod/exec-volume-test-inlinevolume-zdhd node/worker-0-0 reason/Deleted
Jun 15 14:31:51.575 W ns/e2e-volume-6119 pod/gluster-server node/master-0-1 reason/GracefulDelete in 30s
Jun 15 14:31:51.576 I ns/e2e-volume-6119 pod/gluster-server node/master-0-1 container/gluster-server reason/Killing
Jun 15 14:31:53.281 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r588f8a72cb2448cfb49561e0fbed7e16.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 14:31:59.294 W ns/e2e-volume-6119 pod/gluster-server node/master-0-1 reason/Deleted
Jun 15 14:31:59.340 I ns/openshift-config-operator deployment/openshift-config-operator reason/KubeCloudConfigController openshift-config-managed/kube-cloud-config ConfigMap was deleted as no longer required (3803 times)
Jun 15 14:32:01.353 W ns/default pod/recycler-for-nfs-pv10 node/master-0-1 reason/FailedMount MountVolume.SetUp failed for volume "vol" : mount failed: exit status 32\nMounting command: systemd-run\nMounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol --scope -- mount -t nfs registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 /var/lib/kubelet/pods/e7d43eb1-9eca-4899-b40a-c4d0b8932678/volumes/kubernetes.io~nfs/vol\nOutput: Running scope as unit: run-r163197439bb04a1b9ee9bb484af6ffcb.scope\nmount.nfs: mounting registry.kata-fidencio-0.qe.lab.redhat.com:/mnt/pv10 failed, reason given by server: No such file or directory\n
Jun 15 16:32:01.530 I test="[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]" failed

Failing tests:

[sig-storage] In-tree Volumes [Driver: gluster] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume [Skipped:ibmcloud] [Suite:openshift/conformance/parallel] [Suite:k8s]

error: 1 fail, 0 pass, 0 skip (36.9s)