Wednesday, November 1, 2017

Replication error...yes...transaction replication is alive and well!

Oh boy, this was a fun one! Where I work now we have transactional replication set up between an OLTP database and a data warehouse database, in this case spread across two instances, though in many cases both are on the same instance.

I get an urgent call in the middle of the night that client X can't run a report; it's returning 0's instead of information. Yes, this call came at 11:40 PM, and yes, it has to be fixed right now. *sigh*. OK, it IS what I signed up for. Anyways! I've seen this error before when replication is down, so easy fix... let's go look at the Replication Monitor. Hmm... it's up... all green. Let's reinitialize it from a fresh snapshot; that shouldn't take too long.

Everything looks green in the Repl monitor.

Re-run the report.

Same thing. What?

Let's dig deeper. 

I didn't see anything in the monitor:




(dummy screenshot from http://www.sqlservercentral.com/articles/Stairway+Series/72451/ to protect the innocent)

I needed to dig in deeper to see the error; it wasn't until I got into the LogReader Agent Properties that I found the issue (open your server in Replication Monitor, go to your publication, right-click on LogReader Agent and then to Properties). Here is what I see on the error details:







What IS this?

So I dug around online to see what I could find to figure out exactly what the issue is. This is what I came up with:

USE distribution
GO

DECLARE @SeqNo varbinary(16),
  @SeqNoChar nchar(30);

SET @SeqNo = 0x000D12E200003C74000700000000;
SET @SeqNoChar = '0x000D12E200003C74000700000000';

SELECT *
FROM dbo.MSarticles
WHERE article_id IN (SELECT article_id
  FROM MSrepl_commands
  WHERE xact_seqno = @SeqNo)

EXEC sp_browsereplcmds
@xact_seqno_start = @SeqNoChar,
@xact_seqno_end = @SeqNoChar

What?!? Well, take the Transaction Sequence # from the error and plug it into the script; it will show you which commands couldn't be transferred. At this point you can do the following (from https://support.microsoft.com/en-us/help/3066750/how-to-troubleshoot-error-20598-the-row-was-not-found-at-the-subscribe):


  • Manually insert the missing row at the subscriber. This may enable the Distribution Agent to retry the failed command and move forward with the replication. 

    Note There may be other rows that are missing and that have to be manually inserted at the subscriber if there are more failures.
  • Instruct the Distribution Agent to skip this error and continue to replicate the rest of the changes. The Distribution Agent accepts the skiperrors parameter. You can use this parameter to pass error code 20598. This may keep the replication setup intact while you wait for an opportunity to manually synchronize the missing rows. 
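In my case, the second option would have meant adding -SkipErrors to the Distribution Agent's job step command, something along these lines (all the server and database names here are placeholders):

```
-Publisher [PUBSERVER] -PublisherDB [PubDB] -Distributor [DISTSERVER]
-Subscriber [SUBSERVER] -SubscriberDB [SubDB] -Continuous -SkipErrors 20598
```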
As it was very late (early?) at this point, the error had already stopped replication for over 3 days, there was no guarantee that inserting the record would let replication catch up, and they needed their reports run (*sigh*), so replication was rebuilt. From scratch. I dropped all publishers and subscribers, dropped the Distribution DB, and set it up fresh. All is well now, although I missed out on some much-needed beauty sleep!

Friday, July 7, 2017

How to REALLY solve the "DbProviderFactories section can only appear once per config file" error


Had this issue after installing SQL 2017, then installing SQL 2012 side-by-side and trying to create an SSIS package to load data from flat files to SQL. No idea if the dual installation caused it but it definitely caused many hours of frustration!

Many places on the web will have you going to only ONE folder to fix the machine.config file; it took me much digging to find out there are two folders to check!

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config

So here's how to do it:
First, go to one of the two folders listed above; you'll see the different .NET versions there. Go into the v4.0.30319 folder, open the Config folder, and open the Machine.config file in a text editor (NOT Word; use Notepad or something similar... oh hell, just use Notepad to be safe). In that file, search for the string "<DbProviderFactories/>". If you find it, you'll see something similar to this (please note that this can be an issue with ANY provider; I'm showing DB2 here):




<system.data>
    <DbProviderFactories>
        <add name="IBM DB2 for i5/OS .NET Provider" invariant="IBM.Data.DB2.iSeries" description=".NET Framework Data Provider for i5/OS" type="IBM.Data.DB2.iSeries.iDB2Factory, IBM.Data.DB2.iSeries, Version=12.0.0.0, Culture=neutral, PublicKeyToken=9cdb2ebfb1f93a26"/>
        <add name="Microsoft SQL Server Compact Data Provider" invariant="System.Data.SqlServerCe.3.5" description=".NET Framework Data Provider for Microsoft SQL Server Compact" type="System.Data.SqlServerCe.SqlCeProviderFactory, System.Data.SqlServerCe, Version=3.5.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"/>
    </DbProviderFactories>
    <DbProviderFactories/>
</system.data>


The <DbProviderFactories/> doesn’t belong there so delete this line:

<DbProviderFactories/>

Save it, then look in the folders for the other versions for that same line; it's possible it also appears under v2.0.50727. Don't forget to check both folders (Framework and Framework64)!
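If you want to check both locations in one go, here's a quick PowerShell sketch (assuming the default Windows paths listed above):

```powershell
# Report any machine.config that still contains the stray self-closing element
$paths = "C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config\machine.config",
         "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config"
foreach ($p in $paths) {
    if (Test-Path $p) {
        Select-String -Path $p -Pattern '<DbProviderFactories/>' -SimpleMatch |
            ForEach-Object { "{0}: line {1}" -f $p, $_.LineNumber }
    }
}
```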

Tuesday, November 1, 2016

Change collation of a column...wait, make that 808 columns over 200+ tables....

As I patiently wait for servers to restart I find myself thinking about blogging. I am terrible at keeping mine updated, and am trying to think of a way to inspire myself to blog more often. I mean, it's not like I don't have an entire Evernote full of things I can write about...maybe if I bribe myself with cookies or something...

Anyways, on to what you came to read about.

So the other day I got a ticket. "Hey there are columns in some of these tables in this database that might have a different collation, can you make them all the same?"

Wha....? I don't even want to know. <smh>

Ok, I can do this. So I run this to see how bad this really is (change out the collation throughout as required):

SELECT
c.name as column_name,
o.name as tbl_name,
o.type_desc,
c.collation_name
FROM sys.columns c
join sys.objects o
on c.object_id = o.object_id
where o.type = 'U'
and collation_name <> 'SQL_Latin1_General_CP437_CI_AI'

EEEK! 808! Well crud. So knowing that you have to drop constraints and such before altering a table, I decided to be lazy and see if anyone else has done this before (I mean, manually scripting out foreign keys, indexes, computed columns etc for each table would be a nightmare!). So I found this blog post with scripts to at least get an idea of what we're looking at having to do:

https://www.mssqltips.com/sqlservertip/3221/how-to-change-the-collation-of-a-sql-server-column/

Basically these are scripts to check for objects that need to be dropped. Well that's just great. But then what do I do with this knowledge? I STILL have to drop then re-add all of this stuff.
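For example, a quick inventory of foreign keys (one of the usual blockers) can come straight from the catalog views; this is my own sketch, not one of the scripts from that post:

```sql
-- Foreign keys on user tables: these have to be dropped before altering the columns they reference
SELECT fk.name AS fk_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys fk
ORDER BY referencing_table;
```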

Have you guessed by now? The answer is POWERSHELL!!

I found that luckily all I had were indexes to contend with, so I threw together a script to grab all of these items. Because Powershell is mega-cool, I'm sure you can modify the script to grab most of the other constraints. Anything that you can get from a SQL query you can feed to Powershell. So on to the next step.

I ran this to get a list of the tables I needed to modify:

DECLARE @collate SYSNAME
SELECT @collate = 'SQL_Latin1_General_CP437_CI_AI'

SELECT DISTINCT
  SCHEMA_NAME(o.[schema_id]) + '.' + o.name + ','

FROM sys.columns c
    JOIN sys.objects o  ON c.[object_id] = o.[object_id]
    JOIN sys.types t ON c.system_type_id = t.system_type_id AND c.user_type_id = t.user_type_id
WHERE t.name IN ('char', 'varchar', 'text', 'nvarchar', 'ntext', 'nchar')
   AND c.collation_name != @collate
    AND o.[type] = 'U'


I put this in a file at C:\tables.csv. Yes I could have done ALL of this in Powershell, but I find it easier to sometimes break things up into chunks. Besides being easier for others to understand, it's easier for ME to understand when I look at what I did a year ago and go huh??

Onward!

In a new query window I run this on the database to get the # of indexes; I use it later to be sure they have all been dropped and recreated:

SELECT
            so.name AS TableName
            , si.name AS IndexName
            , si.type_desc AS IndexType
FROM
            sys.indexes si
            JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
            so.type = 'U'    --Only get indexes for User Created Tables
            AND si.name IS NOT NULL
ORDER BY
            so.name, si.type 




Then in another window (yes we'll have a few) I run this to get the Alter Table to change the collation (can't remember where I grabbed this but thank you Internet!):





DECLARE @collate SYSNAME
SELECT @collate = 'SQL_Latin1_General_CP437_CI_AI'

SELECT 
  '[' + SCHEMA_NAME(o.[schema_id]) + '].[' + o.name + '] -> ' + c.name
, 'ALTER TABLE [' + SCHEMA_NAME(o.[schema_id]) + '].[' + o.name + ']
    ALTER COLUMN [' + c.name + '] ' +
    UPPER(t.name) + 
    CASE WHEN t.name NOT IN ('ntext', 'text') 
        THEN '(' + 
            CASE 
                WHEN t.name IN ('nchar', 'nvarchar') AND c.max_length != -1 
                    THEN CAST(c.max_length / 2 AS VARCHAR(10))
                WHEN t.name IN ('char', 'varchar') AND c.max_length != -1 
                    THEN CAST(c.max_length AS VARCHAR(10))
                WHEN t.name IN ('nchar', 'nvarchar', 'char', 'varchar') AND c.max_length = -1 
                    THEN 'MAX'
                ELSE CAST(c.max_length AS VARCHAR(10)) 
            END + ')' 
        ELSE '' 
    END + ' COLLATE ' + @collate + 
    CASE WHEN c.is_nullable = 1 
        THEN ' NULL;'
        ELSE ' NOT NULL;'
    END
FROM sys.columns c 
    JOIN sys.objects o  ON c.[object_id] = o.[object_id]
    JOIN sys.types t ON c.system_type_id = t.system_type_id AND c.user_type_id = t.user_type_id
WHERE t.name IN ('char', 'varchar', 'text', 'nvarchar', 'ntext', 'nchar')
   AND c.collation_name != @collate
    AND o.[type] = 'U'


Save the output to run later.

Now run this to generate our drop and create statements, putting them in a folder to run later. Be sure to change the variables at the top to your own values, else it obviously won't run:

#Scripts out objects with a drop and create script

[System.Reflection.Assembly]::LoadWithPartialName( 'Microsoft.SqlServer.SMO') | out-null
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMOExtended') | out-null

$servername = ''
$databasename = ''
$OutputPath = "C:\"


    $SQLServer = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName
    #set script options for the drop statement
    $ScriptDropExisting = new-object ("Microsoft.SqlServer.Management.Smo.Scripter") $SQLServer
    $ScriptDropExisting.Options.ScriptDrops = $True
    $ScriptDropExisting.Options.IncludeIfNotExists = $True
    $ScriptDropExisting.Options.ToFileOnly = $True
    #set script options for the create statement
    $ScriptCreate = new-object ("Microsoft.SqlServer.Management.Smo.Scripter") $SQLServer
    $ScriptCreate.Options.IncludeIfNotExists = $True
    $ScriptCreate.Options.ToFileOnly = $True
    #$ScriptCreate.Options.AppendToFile = $True
    # get the database object
    $Database = $SQLServer.Databases.Item($DatabaseName)
    #add the database to the output path
    $ScriptPath  = $OutputPath + [string]$Database.Name + "\"
    #get our tables
    $tablist = 'C:\tables.csv'
    $Tables = @()
    $Tables = Import-Csv $tablist -Header 'some'
   
    #loop through each table in the database
    ForEach($thing in $Tables)
    {write-host $thing.some
    $thing  = $thing.some

        #get the name separated from schema
        $TableName = $thing.Substring(($thing.IndexOf("."))+1)
        #replace the square brackets
        $TableName = $TableName.Replace('[', '')
        $TableName = $TableName.Replace(']', '')
        #get schema name
        $SchemaName = $thing.Substring(0, $thing.IndexOf("."))
        $SchemaName = $SchemaName.Replace('[','')
        $SchemaName = $SchemaName.Replace(']','')
        #Set the table to be the smo object for that table
        $Table = $SQLServer.Databases[$DatabaseName].Tables[$TableName, $SchemaName]
        #get the indexes on the table
        $Objects = $Table.Indexes
        if($Objects.Count -gt 0)
        {

            #does the path exist?
            if ((test-path $ScriptPath) -eq $false)
            {
                New-item -path $ScriptPath -type directory | out-null
            }
            #loop through each index
            ForEach($Object in $Objects)
            {
                #set the output script file names: drop scripts get a leading underscore so they sort first
                $ScriptFileName = $ScriptPath + "_" + [string]$Object.Name.replace(":", "_") + "_drop.sql"
                $ScriptFileName = $ScriptFileName.Replace("<", "")
                $ScriptFileName1 = $ScriptPath + [string]$Object.Name.replace(":", "_") + "_add.sql"
                $ScriptFileName1 = $ScriptFileName1.Replace("<", "")

                #set the filename in the scripter object
                $ScriptDropExisting.Options.Filename = $ScriptFileName
                $ScriptCreate.Options.Filename = $ScriptFileName1
                #Comfort message
                #$Message = [string](get-date) + ": Generating index script for index " + [string]$Object.Name
                #write-host $Message
                #call the two scripters
                $ScriptDropExisting.Script($Object)
                $ScriptCreate.Script($Object)
            }
        }
        else
        {
            write-host "No indexes exist for table $thing - check your list"
        }
    }
    





Next (yes, I promise we're almost done) we drop the objects using the SQL we just scripted. Run this, and yes, change the parameters at the top for where you're running it:

$TargetDBSQLFILE = Get-Childitem -Path C:\<script path> -Filter *_drop.sql
$servername = ''
$databasename = ''

foreach ($one in $TargetDBSQLFILE)
{
    $file = $one.fullname
    echo ""
    osql -S $ServerName -d $DatabaseName -E -i $file | Out-Null
    write-host "done with $file"
    write-host ''
}

write-host 'done all together'




Notice the _drop in the filter; this run drops the objects. We'll run the same script again later with the _add filter to recreate them.

Run the Alter statements you got from above SQL script to alter the columns to the correct collation.


Run the script again, changing the filter from _drop to _add, to recreate the indexes. You may also want to switch your database to the SIMPLE recovery model first, as the rebuilds will bloat the log (be sure to change it back afterwards), and obviously set your parameters at the top:


$TargetDBSQLFILE = Get-Childitem -Path C:\<script path> -Filter *_add.sql
$servername = ''
$databasename = ''

foreach ($one in $TargetDBSQLFILE)
{
    $file = $one.fullname
    echo ""
    osql -S $ServerName -d $DatabaseName -E -i $file | Out-Null
    write-host "done with $file"
    write-host ''
}

write-host 'done all together'
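On the recovery model note above, the switch itself is just a pair of ALTERs (database name is a placeholder):

```sql
ALTER DATABASE [MyDB] SET RECOVERY SIMPLE;   -- before running the *_add.sql scripts
-- ...recreate the indexes...
ALTER DATABASE [MyDB] SET RECOVERY FULL;     -- change it back afterwards
-- If the DB normally runs in FULL, take a full or differential backup to restart the log chain
```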


Now check your work:

--Run this to get a list of tables with the bad collation:

DECLARE @collate SYSNAME
SELECT @collate = 'SQL_Latin1_General_CP437_CI_AI'

SELECT DISTINCT
  SCHEMA_NAME(o.[schema_id]) + '.' + o.name + ','

FROM sys.columns c 
    JOIN sys.objects o  ON c.[object_id] = o.[object_id]
    JOIN sys.types t ON c.system_type_id = t.system_type_id AND c.user_type_id = t.user_type_id
WHERE t.name IN ('char', 'varchar', 'text', 'nvarchar', 'ntext', 'nchar')
   AND c.collation_name != @collate
    AND o.[type] = 'U'






--run this to get the # of indexes, make sure the number matches the one from above:

SELECT
            so.name AS TableName
            , si.name AS IndexName
            , si.type_desc AS IndexType
FROM
            sys.indexes si
            JOIN sys.objects so ON si.[object_id] = so.[object_id]
WHERE
            so.type = 'U'    --Only get indexes for User Created Tables
            AND si.name IS NOT NULL
ORDER BY
            so.name, si.type 


--Last check:

SELECT 
c.name as column_name, 
o.name as tbl_name,
o.type_desc,
c.collation_name
FROM sys.columns c
join sys.objects o
on c.object_id = o.object_id
where o.type = 'U'
and collation_name <> 'SQL_Latin1_General_CP437_CI_AI'


So I took a ticket that would have taken me HOURS to do and did it in just a few. Hope this helps someone else out!

Friday, May 27, 2016

Move database from one AG to another AG - SQL 2014

Ok, this seems like it'd be simple enough to do. For me, I already had one AG set up on a server. The natives got restless, didn't want to play nice, and didn't want one group's failover to affect the other group. I needed to separate the databases into two different AGs.

Seems totally straight-forward and simple enough.

So I set up the new AGs, set up the new listeners, no errors, life is good! I take one of the databases out of the old AG, and add it to the second, choosing Join Only as the database was already synchronized.

You'd think that's all there is to it.

But all I'm doing is watching the 'Joining <database> to availability group....' message sit there, with the little spinny thing going showing that it's really really trying to do what I ask, then it fails. Huh.

So I look in the error logs.

Nothing. At least nothing useful:

AlwaysOn Availability Groups connection with secondary database terminated for primary database '<database>' on the availability replica '<server>' with Replica ID: {d4811fa0-b284-4730-9879-95cdcca01d82}. This is an informational message only. No user action is required.


What the what? Whaddya mean no user action required? If you did what you were supposed to then maybe so, but this looks like a failure to me.

So then I look on the secondary server.

Wait...what? Where is my AG? I don't see it listed!!

Back to the primary. I open up the AG I was trying to add the database to, looking at everything, when I notice it's showing a little down red arrow on the secondary server.

This day is not going well. The server is up, I was able to connect when making the AG, so what in seven hells is going on here? I right-click the server and click JOIN.

BINGO!

After refreshing the view, red arrow gone, and I'm now able to add my database to the new group. I move the other databases from the old AG to the new, and everything is sunshine and kitties! Maybe some puppies too if they behave and don't chase the kitties.

So, long story short: when creating an AG, check that all replicas are red-down-arrow free before attempting to add databases.
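If you'd rather skip the right-click, the T-SQL equivalent of that JOIN (AG and database names are placeholders) is run on the secondary replica:

```sql
-- On the secondary: join the replica to the availability group
ALTER AVAILABILITY GROUP [MyNewAG] JOIN;

-- Then join the already-synchronized database to the AG on the secondary
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyNewAG];
```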

Tuesday, May 17, 2016

How To Query Oracle Sys Views from MSSQL

So I'm now working in Oracle, but yet still working in SQL Server. Fun times (no sarcasm, I'm loving it)! But I ran into an issue today when trying to get information about Data Guard from Oracle into my SQL Server so I can report on latency.


I have the linked server set up (make sure you have the correct driver on your SQL box, edit the TNSNAMES.ORA file, then create the linked server, tons of info out there on how to do that). What I was having an issue with was getting the V$ views returned from a query without an error.
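For reference, a TNSNAMES.ORA entry looks roughly like this (every name below is a placeholder):

```
MYORA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orahost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = myservice))
  )
```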


For instance, this throws an error:


SELECT * FROM <your linked server>..SYS.V$DATAFILE


You'll get:

Msg 7314, Level 16, State 1, Line 1
The OLE DB provider "OraOLEDB.Oracle" for linked server "<your linked server>" does not contain the table ""SYS"."V$DATAFILE"". The table either does not exist or the current user does not have permissions on that table.

While this syntax works (OPENQUERY passes the statement straight through to Oracle, so the provider never has to resolve the V$ name itself):

SELECT * FROM OPENQUERY(<your linked server>, 'SELECT * FROM V$DATAFILE')

WOOHOO!

And obviously linked server names have been replaced.
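And for the Data Guard latency I was actually after, the same OPENQUERY pattern works against v$dataguard_stats (the linked server name here is made up):

```sql
SELECT *
FROM OPENQUERY(ORA_LINKED, 'SELECT name, value, unit FROM v$dataguard_stats');
```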

Don't bother doing a Google search; I found a lot of nothing! Hence my post. Hope this helps!

Friday, February 26, 2016

It's been a while...and my MSDB has gotten fat!

New job, new house, new year...lots of changes within the last 12 months. What hasn't changed is my love of working with SQL Server. And my annoyance with certain things.

Latest: the MSDB on a server has grown immense. What the heck is going on? So I took a look; it turns out that the MSDB backup job history has never been cleaned up. Due to the high volume of databases on this particular server, combined with tlog backups every 15 minutes, the MSDB database bloated... like, ridiculously. 15 GB.

Now there's a fairly easy fix for this: you can use sp_delete_backuphistory to clean up the MSDB database. One thing, though: I recommend scheduling it to run after hours, because with an MSDB this size it may take a while and could cause performance issues. And increasing the size of the drive would only be a temporary fix; the real problem is that the MSDB backup history is not being cleaned up.

Why isn't there a default cleanup job in SQL set for, say, every 45 days? We clean up Agent error log files, why not the MSDB history?

Schedule this cleanup to delete entries older than a certain (rolling) date and you've got yourself a permanent fix.
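A minimal sketch of that rolling cleanup (pick your own retention):

```sql
-- Delete backup/restore history older than 45 days; schedule this in an Agent job after hours
DECLARE @cutoff datetime = DATEADD(DAY, -45, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```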

Tuesday, September 29, 2015

What do you mean my query is broken? Search isn't working? The Fulltext Phantom.

I have a love/hate relationship with 'problems'. I really like troubleshooting, because I learn so much. Sometimes though it can be really frustrating. Luckily (and I'm sure it was mostly luck that something tickled the back of my memory on this one) I was able to quickly resolve this issue.

Ticket:

I can search and find articles with up to 2 words but anything 3 words and beyond fails to return QA results. 

What? OK, first thing is to ask lots of questions. Yes, the data they wanted was there in all environments, and the query should have returned results; in fact it WAS returning results, but only in Test, not QA.

Here is the query (first the base one, then a modified one; neither worked in QA):


SELECT *
  FROM [MyDB].[this].[table]

  where value like 'diana arriving at the studio%'




******second query*******


USE [MyDB]
GO

DECLARE       @return_value int,
              @RowCount int

EXEC   @return_value = [this].[search_proc]   -- anonymized; EXEC needs a stored procedure
              @SearchText = N'FORMSOF (INFLECTIONAL,"diana") AND FORMSOF (INFLECTIONAL,"arriving") AND FORMSOF (INFLECTIONAL,"at") AND FORMSOF (INFLECTIONAL,"the") AND FORMSOF (INFLECTIONAL,"studio")',
              @Start = 0,
              @End = 100,
              @RowCount = @RowCount OUTPUT

SELECT @RowCount as N'@RowCount'

SELECT 'Return Value' = @return_value


GO

So one of my first questions is: what's different? I looked through some of the basic configurations and data, and yep, all of the data, views, access etc. were there. Then I stopped to think. They're doing a search. What's one of the best ways to make searches like this faster? Fulltext! So I went to look at the Fulltext settings on all of the servers (prod, QA, test, etc.) and found a difference: the offending servers had Transform Noise Words turned OFF. So a quick turn on:

sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'transform noise words', 1;
RECONFIGURE;

GO
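You can confirm the setting took with a quick look at sys.configurations:

```sql
SELECT name, value_in_use          -- value_in_use = 1 means transform noise words is ON
FROM sys.configurations
WHERE name = 'transform noise words';
```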

And the query is returning results as it should. Kind of a sneaky setting, and if you haven't had exposure to Fulltext you might not think of it. So here's your reminder!