No More Tableau Downtime: Metadata API for Proactive Data Health

by Admin
March 21, 2025


In today's world, the reliability of data solutions is everything. When we build dashboards and reports, we expect the numbers reflected there to be correct and up to date. Based on those numbers, insights are drawn and actions are taken. If, for any unforeseen reason, the dashboards are broken or the numbers are incorrect, then it becomes a fire-fight to fix everything. If the issues are not fixed in time, it damages the trust placed in the data team and their solutions.

But why would dashboards break or show wrong numbers? If the dashboard was built correctly the first time, then 99% of the time the issue comes from the data that feeds the dashboards, that is, from the data warehouse. Some possible scenarios are:

  • A few ETL pipelines failed, so the new data is not in yet
  • A table is replaced with another new one
  • Some columns in the table are dropped or renamed
  • Schemas in the data warehouse have changed
  • And many more.

There is still a chance that the issue is on the Tableau site, but in my experience, most of the time it is due to some change in the data warehouse. Even though we know the root cause, it is not always straightforward to start working on a fix. There is no central place where you can check which Tableau data sources rely on specific tables. If you have the Tableau Data Management add-on, it could help, but from what I know, it is hard to find dependencies of custom SQL queries used in data sources.

Moreover, the add-on is quite expensive and most companies do not have it. The real pain begins when you have to go through all the data sources manually to start fixing them. On top of that, you have a string of users impatiently waiting for a quick fix. The fix itself might not be difficult; it is just time-consuming.

What if we could anticipate these issues and identify impacted data sources before anyone notices a problem? Wouldn't that just be great? Well, there is a way now, with the Tableau Metadata API. The Metadata API uses GraphQL, a query language for APIs that returns only the data that you are interested in. For more info on what is possible with GraphQL, do check out GraphQL.org.

In this blog post, I will show you how to connect to the Tableau Metadata API using Python's Tableau Server Client (TSC) library to proactively identify data sources that use specific tables, so that you can act fast before any issues arise. Once you know which Tableau data sources are affected by a specific table, you can make some updates yourself or alert the owners of those data sources about the upcoming changes so they can be prepared for them.

Connecting to the Tableau Metadata API

Let's connect to the Tableau Server using TSC. We need to import all the libraries we will need for the exercise!

### Import all required libraries
import tableauserverclient as t
import pandas as pd
import json
import ast
import re

In order to connect to the Metadata API, you will first have to create a personal access token in your Tableau Account settings. Then update the <token-name> and <token-secret> with the token you just created. Also update <site-id> with your Tableau site. If the connection is established successfully, then "Connected" will be printed in the output window.

### Connect to Tableau server using a personal access token
tableau_auth = t.PersonalAccessTokenAuth("<token-name>", "<token-secret>", 
                                           site_id="<site-id>")
server = t.Server("https://dub01.online.tableau.com/", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    print("Connected")

Let's now get a list of all the data sources that are published on your site. There are many attributes you can fetch, but for the current use case, let's keep it simple and only get the id, name, and owner contact information for every data source. This will be our master list to which we will add all the other information.

############### Get the list of all data sources on your Site

all_datasources_query = """ {
  publishedDatasources {
    name
    id
    owner {
      name
      email
    }
  }
}"""
with server.auth.sign_in(tableau_auth):
    result = server.metadata.query(
        all_datasources_query
    )

Since I want this blog to be focused on how to proactively identify which data sources are affected by a specific table, I will not be going into the nuances of the Metadata API. To better understand how the query works, you can refer to Tableau's own very detailed Metadata API documentation.

One thing to note is that the Metadata API returns data in JSON format. Depending on what you are querying, you will end up with multiple nested JSON lists and it can get very tricky to convert this into a pandas dataframe. For the above metadata query, you will end up with a result that looks like the below (this is mock data, just to give you an idea of what the output looks like):

{
  "data": {
    "publishedDatasources": [
      {
        "name": "Sales Performance DataSource",
        "id": "f3b1a2c4-1234-5678-9abc-1234567890ab",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Orders DataSource",
        "id": "a4d2b3c5-2345-6789-abcd-2345678901bc",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      },
      {
        "name": "Product Returns and Profitability",
        "id": "c5e3d4f6-3456-789a-bcde-3456789012cd",
        "owner": {
          "name": "Alice Johnson",
          "email": "[email protected]"
        }
      },
      {
        "name": "Customer Segmentation Analysis",
        "id": "d6f4e5a7-4567-89ab-cdef-4567890123de",
        "owner": {
          "name": "Charlie Lee",
          "email": "[email protected]"
        }
      },
      {
        "name": "Regional Sales Trends (Custom SQL)",
        "id": "e7a5f6b8-5678-9abc-def0-5678901234ef",
        "owner": {
          "name": "Bob Smith",
          "email": "[email protected]"
        }
      }
    ]
  }
}

We need to convert this JSON response into a dataframe so that it is easy to work with. Notice that we need to extract the name and email of the owner from inside the owner object.

### We need to convert the response into a dataframe for easy data manipulation

col_names = result['data']['publishedDatasources'][0].keys()
master_df = pd.DataFrame(columns=col_names)

for i in result['data']['publishedDatasources']:
    tmp_dt = {k: v for k, v in i.items()}
    master_df = pd.concat([master_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the owner name and email from the owner object
master_df['owner_name'] = master_df['owner'].apply(lambda x: x.get('name') if isinstance(x, dict) else None)
master_df['owner_email'] = master_df['owner'].apply(lambda x: x.get('email') if isinstance(x, dict) else None)

master_df.reset_index(inplace=True)
master_df.drop(['index','owner'], axis=1, inplace=True)
print('There are ', master_df.shape[0], ' datasources in your site')

This is how the structure of master_df would look:

Sample output of the code
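
As a side note, if you prefer not to loop and unpack the owner object by hand, pandas can flatten this particular response in a single call. Below is a minimal sketch, assuming result holds the same response structure as the mock JSON above:

### Optional alternative: flatten the nested owner object in one call
# (a sketch; assumes the response structure shown in the mock JSON above)
master_df_alt = pd.json_normalize(result['data']['publishedDatasources'], sep='_')
# Produces the columns: name, id, owner_name, owner_email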

Once we have the master list ready, we can go ahead and start getting the names of the tables embedded in the data sources. If you are an avid Tableau user, you know that there are two ways of selecting tables in a Tableau data source: one is to directly choose the tables and establish a relationship between them, and the other is to use a custom SQL query with one or more tables to achieve a new resultant table. Therefore, we need to handle both cases.

Processing of Custom SQL query tables

Below is the query to get the list of all custom SQLs used on the site along with their data sources. Notice that I have filtered the list to get only the first 500 custom SQL queries. In case there are more in your org, you will have to use an offset to get the next set of custom SQL queries. There is also the option of using the cursor method in Pagination if you want to fetch a large list of results (refer here). For the sake of simplicity, I just use the offset method, as I know there are fewer than 500 custom SQL queries used on the site.

# Get the data sources and the table names from all the custom sql queries used on your Site

custom_table_query = """  {
  customSQLTablesConnection(first: 500){
    nodes {
        id
        name
        downstreamDatasources {
          name
        }
        query
    }
  }
}
"""

with server.auth.sign_in(tableau_auth):
    custom_table_query_result = server.metadata.query(
        custom_table_query
    )

Based on our mock data, this is how our output would look:

{
  "data": {
    "customSQLTablesConnection": {
      "nodes": [
        {
          "id": "csql-1234",
          "name": "RegionalSales_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Regional Sales Trends (Custom SQL)"
            }
          ],
          "query": "SELECT r.region_name, SUM(s.sales_amount) AS total_sales FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id GROUP BY r.region_name"
        },
        {
          "id": "csql-5678",
          "name": "ProfitabilityAnalysis_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Product Returns and Profitability"
            }
          ],
          "query": "SELECT p.product_category, SUM(s.profit) AS total_profit FROM ecommerce.sales_data.Sales s JOIN ecommerce.sales_data.Products p ON s.product_id = p.product_id GROUP BY p.product_category"
        },
        {
          "id": "csql-9101",
          "name": "CustomerSegmentation_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Segmentation Analysis"
            }
          ],
          "query": "SELECT c.customer_id, c.location, COUNT(o.order_id) AS total_orders FROM ecommerce.sales_data.Customers c JOIN ecommerce.sales_data.Orders o ON c.customer_id = o.customer_id GROUP BY c.customer_id, c.location"
        },
        {
          "id": "csql-3141",
          "name": "CustomerOrders_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT o.order_id, o.customer_id, o.order_date, o.sales_amount FROM ecommerce.sales_data.Orders o WHERE o.order_status = 'Completed'"
        },
        {
          "id": "csql-3142",
          "name": "CustomerProfiles_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT c.customer_id, c.customer_name, c.segment, c.location FROM ecommerce.sales_data.Customers c WHERE c.active_flag = 1"
        },
        {
          "id": "csql-3143",
          "name": "CustomerReturns_CustomSQL",
          "downstreamDatasources": [
            {
              "name": "Customer Orders DataSource"
            }
          ],
          "query": "SELECT r.return_id, r.order_id, r.return_reason FROM ecommerce.sales_data.Returns r"
        }
      ]
    }
  }
}

Just like before when we were creating the master list of data sources, here also we have nested JSON for the downstream data sources, where we need to extract only the "name" part of it. In the "query" column, the entire custom SQL is dumped. If we use a regex pattern, we can easily search for the names of the tables used in the query.

We know that the table names always come after a FROM or a JOIN clause and they generally follow the format database.schema.table. The database part is optional and most of the time not used. There were some queries I found which used this format, and I ended up getting only the database and schema names and not the complete table name. Once we have extracted the names of the data sources and the names of the tables, we need to merge the rows per data source, as there can be multiple custom SQL queries used in a single data source.

### Convert the custom sql response into a dataframe
col_names = custom_table_query_result['data']['customSQLTablesConnection']['nodes'][0].keys()
cs_df = pd.DataFrame(columns=col_names)

for i in custom_table_query_result['data']['customSQLTablesConnection']['nodes']:
    tmp_dt = {k: v for k, v in i.items()}

    cs_df = pd.concat([cs_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

# Extract the data source name where the custom sql query was used
cs_df['data_source'] = cs_df.downstreamDatasources.apply(lambda x: x[0]['name'] if x and 'name' in x[0] else None)
cs_df.reset_index(inplace=True)
cs_df.drop(['index','downstreamDatasources'], axis=1, inplace=True)

### We need to extract the table names from the sql query. We know the table name comes after a FROM or a JOIN clause
# Note that the table name can be of the format db.schema.table or schema.table
# Depending on how the tables are referenced, you may have to modify the regex expression

def extract_tables(sql):
    # Regex to match database.schema.table or schema.table, avoiding aliases
    pattern = r'(?:FROM|JOIN)\s+((?:\[\w+\]|\w+)\.(?:\[\w+\]|\w+)(?:\.(?:\[\w+\]|\w+))?)\b'
    matches = re.findall(pattern, sql, re.IGNORECASE)
    return list(set(matches))  # Unique table names

cs_df['customSQLTables'] = cs_df['query'].apply(extract_tables)
cs_df = cs_df[['data_source','customSQLTables']]

# We need to merge datasources as there can be multiple custom sqls used in the same data source
cs_df = cs_df.groupby('data_source', as_index=False).agg({
    'customSQLTables': lambda x: list(set(item for sublist in x for item in sublist))  # Flatten & keep unique
})

print('There are ', cs_df.shape[0], 'datasources with custom sqls used in it')

Once we perform all the above operations, this is how the structure of cs_df would look:

Sample output of the code
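
Before trusting the regex across the whole site, it helps to sanity-check extract_tables on a single query. Below is a quick illustration using one of the mock custom SQL strings from earlier (not a query from your site):

### Quick sanity check of extract_tables on one of the mock queries
sample_sql = ("SELECT r.region_name, SUM(s.sales_amount) AS total_sales "
              "FROM ecommerce.sales_data.Sales s "
              "JOIN ecommerce.sales_data.Regions r ON s.region_id = r.region_id "
              "GROUP BY r.region_name")
print(extract_tables(sample_sql))
# Expected (order may vary): ['ecommerce.sales_data.Sales', 'ecommerce.sales_data.Regions']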

Processing of regular Tables in Data Sources

Now we need to get the list of all the regular tables used in a data source that are not part of custom SQL. There are two ways to go about it: either use the publishedDatasources object and check for upstreamTables, or use DatabaseTable and check for upstreamDatasources. I will go with the first method because I want the results at a data source level (basically, I want some code ready to reuse when I want to check a specific data source in further detail). Here again, for the sake of simplicity, instead of going for pagination, I am looping through each data source to ensure I have everything. We get the upstreamTables inside the field object, so that has to be cleaned out.

############### Get the data sources with the regular table names used on your site

### It's best to extract the tables information for every data source and then merge the results.
# Since we only get the table information nested under fields, in case there are hundreds of fields 
# used in a single data source, we will hit the response limits and will not be able to retrieve all the data.

data_source_list = master_df.name.tolist()

col_names = ['name', 'id', 'extractLastUpdateTime', 'fields']
ds_df = pd.DataFrame(columns=col_names)

with server.auth.sign_in(tableau_auth):
    for ds_name in data_source_list:
        query = """ {
            publishedDatasources (filter: { name: \"""" + ds_name + """\" }) {
              name
              id
              extractLastUpdateTime
              fields {
                name
                upstreamTables {
                    name
                }
              }
            }
        } """
        ds_name_result = server.metadata.query(
            query
        )
        for i in ds_name_result['data']['publishedDatasources']:
            tmp_dt = {k: v for k, v in i.items() if k != 'fields'}
            tmp_dt['fields'] = json.dumps(i['fields'])
        ds_df = pd.concat([ds_df, pd.DataFrame.from_dict(tmp_dt, orient='index').T])

ds_df.reset_index(inplace=True)

This is how the structure of ds_df would look:

Sample output of the code

We now need to flatten out the fields object and extract the field names as well as the table names. Since the table names will be repeated multiple times, we will have to deduplicate to keep only the unique ones.

# Function to extract the values of fields and upstream tables from json lists
def extract_values(json_list, key):
    values = []
    for item in json_list:
        values.append(item[key])
    return values

ds_df["fields"] = ds_df["fields"].apply(ast.literal_eval)
ds_df['field_names'] = ds_df.apply(lambda x: extract_values(x['fields'], 'name'), axis=1)
ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_values(x['fields'], 'upstreamTables'), axis=1)

# Function to extract the unique table names 
def extract_upstreamTable_values(table_list):
    values = set()
    for inner_list in table_list:
        for item in inner_list:
            if 'name' in item:
                values.add(item['name'])
    return list(values)

ds_df['upstreamTables'] = ds_df.apply(lambda x: extract_upstreamTable_values(x['upstreamTables']), axis=1)
ds_df.drop(["index","fields"], axis=1, inplace=True)

Once we do the above operations, the final structure of ds_df would look something like this:

Sample output of the code
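
If you want a quick feel for the result before moving on, you can count how many distinct upstream tables each data source uses. This is a small optional check; nothing later depends on it:

### Optional check: number of distinct upstream tables per data source.
# Data sources showing 0 here typically rely only on custom SQL,
# which is already covered by cs_df from the previous section.
table_counts = ds_df.assign(n_tables=ds_df['upstreamTables'].apply(len))
print(table_counts[['name', 'n_tables']].sort_values('n_tables', ascending=False).head(10))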

We have all the pieces and now we just have to merge them together:

###### Join all the data together
master_data = pd.merge(master_df, ds_df, how="left", on=["name","id"])
master_data = pd.merge(master_data, cs_df, how="left", left_on="name", right_on="data_source")

# Save the results to analyse further
master_data.to_excel("Tableau Data Sources with Tables.xlsx", index=False)

This is our final master_data:

Sample output of the code

Table-level Impact Analysis

Let's say there were some schema changes on the "Sales" table and you want to know which data sources will be impacted. Then you can simply write a small function that checks whether a table is present in either of the two columns, upstreamTables or customSQLTables, like below.

def filter_rows_with_table(df, col1, col2, target_table):
    """
    Filters rows in df where target_table is part of any value in either col1 or col2 (supports partial match).
    Returns full rows (all columns retained).
    """
    return df[
        df.apply(
            lambda row:
                (isinstance(row[col1], list) and any(target_table in item for item in row[col1])) or
                (isinstance(row[col2], list) and any(target_table in item for item in row[col2])),
            axis=1
        )
    ]

# As an example
filter_rows_with_table(master_data, 'upstreamTables', 'customSQLTables', 'Sales')

Below is the output. You can see that 3 data sources will be impacted by this change. You can also alert the data source owners Alice and Bob in advance, so they can start working on a fix before something breaks on the Tableau dashboards.

Sample output of the code
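
To turn this into the alert mentioned above, one option is to group the impacted rows by owner so that each person gets a single message listing their affected data sources. Below is a minimal sketch; the actual delivery (email, Slack, a ticket) is assumed to be handled by whatever tooling you already have:

### Build a per-owner summary of impacted data sources (delivery not included)
impacted = filter_rows_with_table(master_data, 'upstreamTables', 'customSQLTables', 'Sales')

alerts = (impacted.groupby(['owner_name', 'owner_email'])['name']
          .apply(list)
          .reset_index()
          .rename(columns={'name': 'impacted_datasources'}))
print(alerts)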

You can check out the complete version of the code in my Github repository here.

This is just one of the potential use cases of the Tableau Metadata API. You can also extract the field names used in custom SQL queries and add them to the dataset to get a field-level impact analysis. One can also monitor stale data sources with the extractLastUpdateTime to see if those have any issues or should be archived if they are not used anymore. We can also use the dashboards object to fetch information at a dashboard level.
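
The stale-data-source idea is easy to prototype from the master_data we already built. Below is a rough sketch, assuming a 30-day threshold (pick whatever suits your refresh schedules) and noting that extractLastUpdateTime is only populated for extract-based data sources:

### Rough sketch: flag data sources whose extract has not refreshed in 30 days.
# Live connections have no extractLastUpdateTime and will show up as NaT,
# so they are excluded from this check.
master_data['extractLastUpdateTime'] = pd.to_datetime(
    master_data['extractLastUpdateTime'], errors='coerce', utc=True)
cutoff = pd.Timestamp.now(tz='UTC') - pd.Timedelta(days=30)
stale = master_data[master_data['extractLastUpdateTime'] < cutoff]
print(stale[['name', 'owner_email', 'extractLastUpdateTime']])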

Final Thoughts

If you have come this far, kudos. This is just one use case of automating Tableau data management. It is time to reflect on your own work and think about which of those other tasks you could automate to make your life easier. I hope this mini-project served as an enjoyable learning experience in understanding the power of the Tableau Metadata API. If you liked reading this, you might also like another one of my blog posts about Tableau, on some of the challenges I faced when dealing with big .

Also do check out my previous blog, where I explored building an interactive, database-powered app with Python, Streamlit, and SQLite.


Before you go…

Follow me so you don't miss any new posts I write in the future; you will find more of my articles on my . You can also connect with me on LinkedIn or Twitter!


