An Example Tableau Security Model

My experience navigating Tableau security as a novice…

I recently upgraded a Tableau 10.1 estate to Tableau 2018.1.  I took the opportunity to completely rework the security from the ground up.

When starting out, much of the guidance I found on the net focused on the many individual components that make up a Tableau estate, rather than on how to bring them together into an overall security model.

While I’m certainly not claiming what I have done is best practice, I hope it will trigger some ideas and serve as a starting point for your own implementations.

This guide doesn’t cover licensing, although that is definitely worth understanding if you are implementing a Tableau security model.  If you need to learn about Tableau licensing, this article does a great job of explaining both the old and new models.


SSRS Standards

When I joined my current employer, they had just started an informal project with 6 BI developers to deliver a host of new reports against their new data warehouse using SSRS 2016 Standard.

These are the standards I put together to try and bring some consistency to both the reports and the environments.

As with any standards, my goal was to address the main areas without being so onerous as to legislate for minute details.  I also wanted to provide a steer on items that end up being arbitrary decisions, such as the names of data-sources.

Hopefully it can act as a starting point for your own organisations.  I’ll look to cover the rationale behind the access model in another post.

The document, which also contains details on how to set up the project configurations, can be found here.

SSRS Standards

Related Links:

How to enable Remote Errors in SSRS

How to set retention period of the execution log

Default SQL Server Settings

Introduction

You can change server-wide default settings by using facets and properties at the server level, or by modifying the model system database (which acts as a template for newly created databases).
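
For instance, anything you change in model is inherited by every database created afterwards.  As a minimal illustration, here is how you might set a default recovery model (the choice of SIMPLE here is just an example, not a recommendation):

-- Any database created after this runs inherits its recovery model
-- from the model template database.
ALTER DATABASE model SET RECOVERY SIMPLE;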

I don’t change much but here are the settings I do change to make life easier for myself.

Note: I fully understand that everything needs to be considered on a case-by-case basis – I’m just presenting these as possible ideas.


#1 – Backup locations – (In SSMS object explorer, right-click the server >> Facets >> Backup Directory)

When configuring my backup procedures, I like to set up two folders on each database server, such as the below:

E:\SQL_Backups_Nightly\ | Used with nightly backup maintenance job.
E:\SQL_Backups_Adhoc\   | Used for manual backups/restores.

I then set the latter as my default backup directory in the server Facets pane.

This means that when I do manual backups or restores, the dialog box takes me straight to this folder.  I find that when restoring a database under pressure, trying to navigate to the default paths (which usually look like the one below) just adds unnecessary confusion.

\\Servername\e$\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup
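
If you would rather script this than click through the Facets pane, the same value can be set with the xp_instance_regwrite extended procedure.  It is undocumented but widely used; as ever, test it on a non-production instance first.

-- Sets the instance's default backup directory (the same value
-- exposed in the server Facets pane).
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'E:\SQL_Backups_Adhoc';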

#2 – File Locations – (In SSMS object explorer, right-click the server >> Properties)

This is similar to the backup locations above.  In Database Settings, under Database Default Location, I use the following values.

E:\SQL_Server_Data\
F:\SQL_Server_Logs\

Again, these paths are just easier to find, and if you ever have to write MOVE statements and the like, having simple locations makes the job a lot easier.
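
As a sketch of what that looks like in practice, here is a restore that relocates the files to those paths (the database name and logical file names are made up for illustration):

-- Restore a backup from the ad-hoc folder, moving the data and
-- log files to the simple default locations set above.
RESTORE DATABASE SalesDB
FROM DISK = N'E:\SQL_Backups_Adhoc\SalesDB.bak'
WITH MOVE N'SalesDB'     TO N'E:\SQL_Server_Data\SalesDB.mdf',
     MOVE N'SalesDB_log' TO N'F:\SQL_Server_Logs\SalesDB_log.ldf',
     STATS = 10;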

For the record, it’s a good idea to keep your log files on a separate physical disk from your data files; please see this article for a full explanation.


Creating an ‘SSIS Configurations’ report

When starting my current role, I didn’t get a handover period from the departing developer.  The company had a lot of SSIS packages, and I needed a way to test them safely without affecting production file shares and the like.

In the end I asked a consultant who was working with me (and who knew C#) to write something to help me extract the .dtsconfig data into a spreadsheet.

Once I had the data in Excel, I imported it into SQL and built the below SSRS report so I could see side-by-side comparisons for each of the packages…

[Image: SSIS_ConfigReport]
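
Incidentally, a .dtsconfig file is just XML, so if you only need to inspect one or two you can shred them directly in T-SQL rather than going via a spreadsheet.  A rough sketch (the file path is hypothetical, and OPENROWSET(BULK ...) needs bulk-operation permissions):

DECLARE @cfg XML;

SELECT @cfg = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK N'C:\SSIS_Configs\MyPackage.dtsconfig', SINGLE_BLOB) AS f;

-- One row per configured property: the package path it targets
-- and the value the config file will apply.
SELECT
    c.value('@Path', 'nvarchar(400)')                AS PropertyPath,
    c.value('(ConfiguredValue)[1]', 'nvarchar(400)') AS ConfiguredValue
FROM @cfg.nodes('/DTSConfiguration/Configuration') AS t(c);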

I have made the tool available on my Mega account if you want to give it a go yourself.  Instructions are also provided in the folder.

dtsConfig-tool (choose download as zip).

Thanks to Steve Cage, who wrote the extraction tool and let me post it online 🙂

A better way to deliver data-dumps

A common task when working as a data-developer is to produce one-off data extracts for various users, especially if you are not working within a self-service model.

This is fine, but it can lead to a really frustrating situation whereby you get a request such as “please can you re-run this data extract? I need it within the next hour”.

Often the only information the requestor can give you is which developer ran it for them last time (and that developer has usually left the company).  You then have to try and reverse-engineer the output and hope for the best.

To get around this, I use the following Excel template, which contains the output as well as a tab recording the SQL used, the server it was run against, and the date it was produced.  This allows for a quick and easy ‘refresh’ of the data and also means the data is traceable back to its source.

data_extract_template.xlsx
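
To fill in that tab, I find it easiest to capture the run details from the session itself rather than typing them in.  Something along these lines works (a sketch; add whatever columns you want to record):

-- Run alongside the extract and paste the single-row result
-- into the template's source tab.
SELECT
    @@SERVERNAME  AS ServerName,
    DB_NAME()     AS DatabaseName,
    SYSTEM_USER   AS RunBy,
    SYSDATETIME() AS RunDate;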

Note:  I also have another version of this template with a number of macros, bound to keyboard shortcuts, that quickly apply formatting options such as UK dates, but I cannot upload .xlsm files to WordPress.  It is, however, very easy to build yourself.

Kimball vs Inmon

“We acknowledge that organizations have successfully constructed DW/BI based on the approaches advocated by others… Rather than encouraging more consternation over our philosophical differences, the industry would be far better off devoting energy to ensure that our DW/BI deliverables are broadly accepted by the business to make better, more informed decisions.”

– Ralph Kimball

We see this a lot: teams spend all their time deadlocked over which approach, project methodology and architecture to choose, and then go and over-engineer something that is of little use to the business anyway.

It is a better approach to keep things simple and deliver early, ensuring the solution is consistent, accurate and maintainable.  There is no point in making sure your data-mart can scale to 1TB in size if no-one is going to use it.