Hi
I am new to Databricks and am trying to connect to RStudio Server from my all-purpose compute cluster.
Here is the cluster configuration:
Policy: Personal Compute
Access mode: Single user
Databricks Runtime version:
> library(sparklyr)
> sc <- spark_connect(method = "databricks")
Error in value[[3L]](cond) : Failed to start sparklyr backend: java.util.concurrent.ExecutionException: org.apache.spark.SparkException: There is no Credential Scope.
	at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
	at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
	at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
	at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2344)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2316)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
	at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
	at com.google.common.cache.Loc
In addition: Warning messages:
1: In file.create(to[okay]) : cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.2-2.11.jar', reason 'Permission denied'
2: In file.create(to[okay]) : cannot create file '/usr/local/lib/R/site-library/sparklyr/java//sparklyr-2.1-2.11.jar', reason 'Permission denied'
> library(SparkR)
> sparkR.session()
Java ref type org.apache.spark.sql.SparkSession id 1
> df <- SparkR::sql("SELECT * FROM default.diamonds LIMIT 2")
Error traceback
Error in handleErrors(returnStatus, conn) : org.apache.spark.sql.AnalysisException: There is no Credential Scope. ; line 1 pos 14
	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:69)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:172)
	at org.apache.spark.sql.execution.datasources.ResolveSQLOnFile$$anonfun$apply$1.applyOrElse(rules.scala:94)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$2(AnalysisHelper.scala:219)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:106)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDownWithPruning$1(AnalysisHelper.scala:219)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:372)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDownWithPruning(AnalysisHelper.scal
Can someone help me?
I realize this sounds a bit obscure, so I will demonstrate with an example:
We have three users: user_1, user_2 and user_3
We have two AD groups: group_a and group_b
We have two tables: table_a and table_b
user_1 and user_2 are in group_a with access to table_a. user_3 is in group_b with access to table_b. user_1 creates a new table, table_a1, from table_a. I want user_1 to be able to grant user_2 permission on table_a1, but not user_3.
Is this possible, and if so, how do I set it up?
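One possible sketch of the grants for the scenario above, assuming table access control (or Unity Catalog) is enabled on the workspace; the principal names below are placeholders from the example, not real accounts:

```sql
-- user_1 created table_a1, so user_1 is its owner and may grant on it.
-- Grant read access to user_2 only:
GRANT SELECT ON TABLE table_a1 TO `user_2`;

-- Simply issue no grant to user_3 or group_b; without a grant,
-- user_3 cannot read table_a1.
```

Because grants are additive and ownership of a table lets the owner grant on it, user_1 controls who sees table_a1; user_3 is excluded just by never being granted anything on it.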
Regards
Sridhar
Excited to hear about all the new innovations from Databricks on Unity Catalog to improve data governance, like Federation. The possibility of including external data sources is very powerful. Looking forward to seeing other sources included, like Oracle.