Replies: 2 comments 5 replies
- Has this already been done on the Spark side?
- +1, we have the same problem.
-
We do not use the kyuubi-spark-authz module yet; we still use the old submarine plugin. But I think the same issue exists in the kyuubi-spark-authz plugin.
We run Spark on K8s, where Spark runs in a container that has no specific user, only UID 185.
In our case, during the Ranger check, the groups of the current user are resolved through `UserGroupInformation.getGroupNames()`. If no configuration has been set, `UserGroupInformation` falls back to its default configuration, so `ShellBasedUnixGroupsMapping` is used and it logs a warning. The warning itself is harmless, but it is messy because it comes with a long (86-line) stack trace. So we wrote a small custom GroupsMapping class to address this (a rough sketch follows below).
To apply it to the Ranger plugin, we need to call `UserGroupInformation.setConfiguration(spark.sparkContext.hadoopConfiguration)` before it is used. I think this is the right way to obtain the Hadoop configuration, rather than reading `*-site.xml` files or anything else. So I think the kyuubi-spark-authz module also needs this call. If agreed, I'll open an issue and a PR.
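Something along these lines is what I have in mind for the plugin side; the object and method names below are purely illustrative, not kyuubi-spark-authz's actual API:

```scala
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.sql.SparkSession

// Illustrative sketch: push the SparkSession's Hadoop configuration into UGI once,
// before any Ranger access check resolves groups via UserGroupInformation.getGroupNames().
// Without this, UGI builds a default Configuration and falls back to
// ShellBasedUnixGroupsMapping, which produces the warning described above.
object AuthzUgiInit {
  def initialize(spark: SparkSession): Unit = {
    UserGroupInformation.setConfiguration(spark.sparkContext.hadoopConfiguration)
  }
}
```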
Thank you