KNOWLEDGE BASE

Extract refresh fails with the error 'containers exceeding thresholds' when refreshing a dashboard that connects to Databricks.


Published: 04 Aug 2021
Last Modified Date: 05 Aug 2021

Issue

An extract refresh fails with the error below when refreshing a dashboard that connects to Databricks.

[Simba][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure.

Environment

  • Tableau Desktop
  • Databricks

Resolution

Solution on the Tableau side:
Add filters to avoid reaching the query thresholds managed on the database side, and retrieve fewer columns.

Solution on the database side:
Modify the broadcast timeout settings in the Azure Databricks cluster configuration.
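As one possible approach, the broadcast timeout can be raised (or automatic broadcast joins disabled) through the cluster's Spark configuration. The values below are an illustrative sketch, not prescriptive settings; tune them to the workload:

```
# Increase how long Spark waits for a broadcast join before failing (default is 300 seconds)
spark.sql.broadcastTimeout 600

# Optionally disable automatic broadcast joins entirely (value in bytes; -1 disables)
spark.sql.autoBroadcastJoinThreshold -1
```

In Azure Databricks, these lines go in the cluster's Advanced Options under the Spark Config field; the cluster must be restarted for them to take effect.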

Additional Information

Snippet of tabprotosrv.txt:

error-records : [{'sql-state-desc': 'SQLSTATE_GENERAL_ERROR_ODBC3x', 'sql-state': 'HY000', 'native-error': 35, 'error-desc': "[Simba][Hardy] (35) Error from server: error code: '0' error message: 'Error running query: org.apache.spark.SparkException: Job aborted due to stage failure: Task 56 in stage 41.0 failed 4 times, most recent failure: Lost task 56.3 in stage 41.0 (TID 474, 172.18.128.4, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.\nDriver stacktrace:'.", 'error-record': 1}]