I installed the SAML Tracer extension and analysed the SAML response. It seems to be correctly signed. You can find it in the file attached to this message.
Any help would be really appreciated,
Hello everyone
I'm trying to set up my admin account in the Databricks Admin Console and am following these steps:
https://learn.microsoft.com/en-us/azure/databricks/administration-guide/#--what-are-account-admins
However, when I open the accounts.azuredatabricks.net page, click the "Sign in with Azure AD" button, and select my personal account, I see the following error:
Selected user account does not exist in tenant 'Microsoft Services' and cannot access the application '2xx81xxx-x30x-xabx-x5cx-xxxe6f879xxx' in that tenant. The account needs to be added as an external user in the tenant first. Please use a different account.
I've been stuck on this for two days now and can't find any relevant information on how to resolve the issue or configure my account in Azure AD.
Should I perhaps register an app in Azure AD? If so, could someone please point me to the relevant documentation?
I have a tenant in Azure AD with a single user (the one I'm trying to log in with); the user has just one role, "Global Administrator".
Thanks for the help!
I came across the custom_config option here, but I'm not sure how to use it to add a list of GitHub URLs. Can anyone help me with this?
My question is: if I don't want to create only private shares, do I still have to be a registered provider with Databricks?
Your early response would be very much appreciated.
Is there a way, via CLI or API, to call the global search endpoint https://<workspace-domain>/graphql/SearchGql so the results can be analysed automatically with a script? I couldn't find a way in the documentation.
Right now I'm having to download all notebooks (via the API) to run the search locally. The download itself takes too long (several hours) because there are thousands of notebooks across multiple workspaces. Even if I multithread, there's a limit of 30 requests per second. I also have to account for unsupported file formats and path length limitations.
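For reference, the workaround looks roughly like this — a minimal sketch built on the documented Workspace API (list + export) rather than the internal SearchGql endpoint; the host, token, and search term are placeholders:

import base64
import requests

# Sketch only: HOST, TOKEN, and the search term are placeholders.
HOST = "https://<workspace-domain>"
TOKEN = "<personal-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_notebooks(path="/"):
    """Recursively yield notebook paths under `path`."""
    resp = requests.get(f"{HOST}/api/2.0/workspace/list",
                        headers=HEADERS, params={"path": path})
    resp.raise_for_status()
    for obj in resp.json().get("objects", []):
        if obj["object_type"] == "DIRECTORY":
            yield from list_notebooks(obj["path"])
        elif obj["object_type"] == "NOTEBOOK":
            yield obj["path"]

def export_source(path):
    """Download a notebook's source so it can be searched locally."""
    resp = requests.get(f"{HOST}/api/2.0/workspace/export",
                        headers=HEADERS,
                        params={"path": path, "format": "SOURCE"})
    resp.raise_for_status()
    return base64.b64decode(resp.json()["content"]).decode("utf-8")

# No throttling shown here; in practice the ~30 requests/s limit applies.
for nb_path in list_notebooks("/"):
    if "search_term" in export_source(nb_path):
        print(nb_path)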
So it would be best to use Databricks' own indexing to make this faster.
The request itself seems to use cookies to get the session ID and authenticate (I think), and I don't know how to reproduce it in, say, Postman.
Is there a way to use a user token instead?
Thanks!
Hi,
I'm trying to export the SQL queries in certain folders in the workspace, but the list content API call
Hey
I'm using Databricks Runtime 13.2 https://docs.www.eheci.com/en/release-notes/runtime/13.2.html#system-environment
It uses JDK 8.
Question: Is it possible to upgrade the Java version to JDK 17?
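For what it's worth, this is the kind of cluster spec I was hoping would work — a sketch assuming the experimental JNAME environment variable (which Databricks documents for selecting an alternative JDK on recent runtimes) is the right switch; host, token, and node type are placeholders:

import requests

# Sketch only: host, token, and node type are placeholders.
# Assumption: recent runtimes accept an experimental JNAME environment
# variable to select the JDK (zulu17-ca-amd64 for JDK 17).
HOST = "https://<workspace-domain>"
TOKEN = "<personal-access-token>"

cluster_spec = {
    "cluster_name": "jdk17-test",
    "spark_version": "13.2.x-scala2.12",
    "node_type_id": "<node-type>",
    "num_workers": 1,
    # The JDK switch; applies to driver and workers.
    "spark_env_vars": {"JNAME": "zulu17-ca-amd64"},
}

resp = requests.post(f"{HOST}/api/2.0/clusters/create",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=cluster_spec)
resp.raise_for_status()
print(resp.json()["cluster_id"])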
Thanks!
Is there a way to stop those machines? Does somebody have the same problem?
Br,
Aleksandar
This works:
dbutils.fs.ls("s3://BUCKETNAME/dev/health")
But within the same bucket we get the location overlap error when running:
dbutils.fs.ls("s3://BUCKETNAME/dev/claims/")
I reviewed the article https://kb.www.eheci.com/en_US/unity-catalog/invalid_parameter_valuelocation_overlap-overlaps-with-managed-storage-error, but it's unclear to me why one path works and the other doesn't. Is there something I should check other than the IAM permissions for the bucket in question?
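One check that might narrow this down: the overlap error is usually about the failing path colliding with a registered external location or a table's managed storage rather than IAM. A minimal sketch for listing what Unity Catalog has registered (the location name in the last line is hypothetical):

# Sketch: run in a notebook on a UC-enabled cluster. The idea is to
# compare the failing path against registered storage locations.
for row in spark.sql("SHOW EXTERNAL LOCATIONS").collect():
    print(row["name"], row["url"])

# A suspect location can then be inspected in detail (hypothetical name):
spark.sql("DESCRIBE EXTERNAL LOCATION `some_location`").show(truncate=False)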
This should theoretically work according to the answers in this thread.
But unfortunately I get the following error from Terraform for the resource "databricks_service_principal_role": Error: cannot read service principal role: Service Principal has no role
For me this error message is not very useful, and I don't know what is wrong here. Could this be a bug in the Databricks Terraform provider?
Side notes (if relevant):
Looking at the source code of the Databricks Terraform provider on GitHub, I found the error message above, but I don't understand why the "ReadContext" section in there is even executed.
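To see what the workspace actually returns for the service principal, the SCIM API can be queried directly — a sketch with host, token, and application ID as placeholders:

import requests

# Sketch only: host, token, and the application ID are placeholders.
HOST = "https://<workspace-domain>"
TOKEN = "<personal-access-token>"
APP_ID = "<service-principal-application-id>"

resp = requests.get(
    f"{HOST}/api/2.0/preview/scim/v2/ServicePrincipals",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filter": f'applicationId eq "{APP_ID}"'},
)
resp.raise_for_status()
for sp in resp.json().get("Resources", []):
    # "roles" is absent or empty when no role is attached, which is
    # presumably what the provider's ReadContext trips over.
    print(sp.get("displayName"), sp.get("roles", []))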
It would be really nice if someone could help me, as I have to enable the Unity Catalog metastore very soon.
Hello,
I see that Object Lock is not currently supported by Databricks:
https://kb.www.eheci.com/en_US/delta/object-lock-error-write-delta-s3
Is there any timeline / roadmap for the support of this feature?
Hi all,
I have a user who is enrolled in multiple groups within Databricks, but she can only query data for the original group she was assigned to.
For example:
Group 1 will only be able to view data where [group] (column) = 'group1' (value)
Group 2 will only be able to view data where [group] (column) = 'group2' (value)
I have added this user to both group 1 and group 2, so she should be able to query both groups' data, i.e. [group] = 'group1' or 'group2'. But she can only view 'group1' data.
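For context, I'd expect a filter along these lines to allow both groups — a sketch with hypothetical catalog, table, and column names, assuming the data is exposed through a view that calls is_member():

# Sketch only: catalog, table, view, and column names are hypothetical.
# Assumption: access is controlled by a view using is_member(); the
# predicates must be OR'ed so membership in either group qualifies.
# (For account-level groups, is_account_group_member() may be needed
# instead of is_member().)
spark.sql("""
    CREATE OR REPLACE VIEW main.default.filtered_v AS
    SELECT *
    FROM main.default.base_table
    WHERE (is_member('group1') AND group_col = 'group1')
       OR (is_member('group2') AND group_col = 'group2')
""")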
Can someone advise? Thank you