The Multifunctioning DBA


December 10, 2008  10:26 PM

Sybase Training Day 4

Colin Smith

Day 4 of training was really all about backup and restore. This was a good class, and I learned a lot about why my company does some of the things that it does. I knew how to do a database dump and a database load going in, but I still picked up a great deal and found a couple of things very interesting.

First, I learned that it is easy to back up all of the DDL for everything in the database, and that it is a good idea to do so. This way, if disaster strikes, you can quickly get everything back up and running. Even if you have backups of the databases, you might not be able to quickly get all of your disks re-initialized in the correct order. If you have to rebuild master, you have to rebuild all the devices with their original vdevno numbers. I do not know about most of you, but I do not specify the vdevno when I do a disk init; Sybase automatically finds the lowest available vdevno and assigns it to the device. If you do not have the original DDL, you may have a hard time getting this right. You can use Sybase Central or other third-party tools to extract the DDL of objects, but a cooler, faster way is to run ddlgen on the server host and send the output to a file or set of files that you keep someplace safe, just in case.
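As a rough sketch of what that looks like (the server host, port, login, and paths here are placeholders, so adjust for your environment):

# Extract the DDL for an entire database to a file with ddlgen
# -TDB asks for database-level DDL; -N names the database
ddlgen -Usa -P<password> -Sdbhost:5000 -TDB -Npubs2 -O/sybase/ddl/pubs2_ddl.sql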

Second, I learned that compression is pretty awesome. You can specify the level of compression to use when backing up a database. The levels are 0-9, with 9 the most compressed. We found, though, that you do not really gain much after the first level: level 1 already dramatically reduces the backup file size, and anything beyond it may not be worth the extra work the server has to do. The difference between the first level of compression and the last is not that much.
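For example, using the compress:: syntax (the dump path is a placeholder), level 1 gives you most of the size reduction for the least CPU:

-- Level 1 compression; levels run 0-9, with 9 the most aggressive
dump database pubs2 to "compress::1::/backups/pubs2.dmp"
go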

Third, backups are very light on the actual database server. Since dumps are handled by the Backup Server rather than the data server itself, you can run a backup online and users will most likely not even know it is happening.

Fourth, you should run a dbcc check of the database to make sure you do not have any sort of problem before doing the backup. Do this as often as you can anyway: it helps prevent a problem from going unnoticed for a long time, because if that happens you might not have a backup without the error, and that can get you into a world of hurt.
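A minimal pre-dump consistency check might look like this (pubs2 is just an example database):

dbcc checkdb(pubs2)       -- checks every table and index
go
dbcc checkalloc(pubs2)    -- checks page allocation
go
dbcc checkcatalog(pubs2)  -- checks system table consistency
go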

And of course we talked about how important it is to do regular backups of your databases. If you want to be able to recover to the most recent point in time, you will want to do transaction log dumps as well. Just do them often, and try to check your databases with dbccs as often as possible too.
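A typical log dump, with a placeholder path, is just:

-- Run this frequently between full database dumps to keep the
-- recovery point close to the present
dump transaction pubs2 to "/backups/pubs2_tlog.dmp"
go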

 

Once again if you have any questions or need anything just let me know.

December 10, 2008  10:12 PM

Sybase 15 Upgrade Project

Colin Smith

My company is in the process of testing Sybase 15 with one of our homegrown apps. Since I am the new guy on the DBA team, this project got dumped on me. That is OK by me, since I think it is a great opportunity to learn some new things, and I have. I went through and found out everything I needed to know to get the database server installed, set up all the disk devices, and create the databases for load. Once I did that, I loaded the databases and watched as ASE upgraded them from 12.5 to 15. Everything seemed to go great. The only real problem I ran into was that I had to drop all encryption and set it up again. Other than that, everything went smoothly, so I handed the server over to the developers and told them to start testing. So they did just that.

A couple of weeks into testing, with no major issues, I got an email saying they have a stored procedure that runs in a few minutes on 12.5 and never completes on 15.0. My team and I looked into this, and the developers did as well. We could not really figure out what was going on. We saw that the optimizer was doing things differently in 15 than in 12.5, but we were not sure why. We opened a case with Sybase, and after a couple of weeks of communication and sending them data, the level-one support person agreed that we had a problem, could replicate the issue, and was flying it up to engineering. A week later we finally heard back: engineering could not duplicate the issue, so as far as they were concerned we did not have one. Let me assure you that we still had an issue. By this time I had gone to my training class and mentioned this to the instructor. He kind of laughed about it, but I am sure he would have helped out more if I had asked; the instructor was very good and very knowledgeable.

Anyway, a day or two later I got an email from one of our contract developers saying he has a solution. WOW, this is great news, and the solution is actually pretty simple; it should work for anyone having a similar issue. I am not sure it is always the best answer, but all he did was force the 12.5 plan on the 15.0 optimizer. I mentioned this to the instructor and he agreed that in this case it was a good solution. So anyone hitting a similar issue can use this to get running until Sybase helps you find another solution, if in fact another solution exists.
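I did not dig into exactly how the plan was forced, and ASE gives you a few levers for this, from abstract query plans captured on 12.5 down to simple session options. As one hedged illustration (the procedure name is made up), set forceplan pins the join order to the order of the from clause:

-- One simple way to pin down a plan; abstract query plans are the
-- heavier-duty option for replaying a captured 12.5 plan
set forceplan on
go
exec problem_proc
go
set forceplan off
go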

 

Thanks and please let me know if you have any questions.


December 10, 2008  12:16 AM

Sybase Training Day 3

Colin Smith

In day 3 we covered another five sections, so once again I am just going to go over what I found most interesting.

I would say there are two things from day 3 that I found to be very cool. First is permissions and how they work in Sybase. First you have to have a login to the server. Once that is created you can log on to the server, but that does not mean you can do anything yet. Next you need a user set up in each database you would like to work in. For instance, to work in the pubs2 database you have to have a login to the server but also a user in the pubs2 database. Now that you have that, you can say 'use pubs2' and get into the database. This still does not mean you can do anything in the pubs2 database: next comes object-level security, and your user has to have permissions on objects in the database in order to view or manipulate those objects. I love this layered approach to security and think it is a great way to do things. It gives the DBA so many ways to accomplish what is needed.
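As a quick sketch of the three layers (the login name and password are made up):

-- 1. Server-level login
sp_addlogin 'colin', 'Str0ngPass'
go
-- 2. Database-level user
use pubs2
go
sp_adduser 'colin'
go
-- 3. Object-level permission
grant select on titles to colin
go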

The other thing on day 3 that I found just great was the bulk copy utility, bcp. This tool gives the DBA the ability to copy data into or out of a table, which means I can copy the contents of a table into a file and then use that same file to copy the data into a different table if I need to. The one drawback is that bcp is a minimally logged operation, so you need to enable the relevant database option first, and after the bcp operation you will want to run a full database dump, because the operation invalidates your transaction logs since Sybase does not fully log this type of event.
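A rough sketch of a round trip (the server name, login, and file names are placeholders):

-- Enable minimally logged operations in the target database first
sp_dboption pubs2, 'select into/bulkcopy/pllsort', true
go

Then, from the OS shell:

# Copy the data out in character mode, then back into another table
bcp pubs2..titles out titles.dat -Usa -P<password> -SSYBPROD -c
bcp pubs2..titles_copy in titles.dat -Usa -P<password> -SSYBPROD -c

And remember the full database dump afterward, since the log chain is now invalid.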

That is a very broad overview of day 3.  If you have any questions about anything more specific please let me know.


December 5, 2008  10:03 PM

Sybase Training Day 2

Colin Smith

Sybase ASE Database Administration training class day 2 was a busy day, and we again covered five chapters: database devices, creating databases, disk resource strategies, auto expansion, and finally the big one, tempdb. I am not going to go into too much detail about these, as I think you should take the class. Anyway, let's start with devices.

 

Database devices are essentially files, or chunks of disk in raw format, that ASE uses to store databases and logs. This is a pretty basic concept, and the biggest thing that came out of this section was the directio and dsync device options. On Unix and Linux, if you want to guarantee that your disk writes actually make it out to disk, you must turn one of these on. DirectIO is more for Linux, where you will most likely see a performance increase using directio rather than dsync.
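A sketch of creating a file-system device with directio turned on (the name, path, and size are examples):

disk init
    name = "data_dev1",
    physname = "/sybase/devices/data_dev1.dat",
    size = "2G",
    directio = true
go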

Creating databases was next, and this is also very simple. This creates the database and its log on the devices that are now set up; essentially it creates an empty database and puts all the needed entries into the relevant system tables.
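For example, with data and log kept on separate devices (names and sizes are made up):

create database pubs2_test
    on data_dev1 = "500M"
    log on log_dev1 = "100M"
go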

Disk strategies are also not too difficult. Most of the documentation talks about separating log and data onto different physical devices, but this is not always easy since in most cases the devices live on a RAID array or a SAN. Just make sure that log goes onto one device or set of devices and data onto a separate set, and never mix data and log on the same device for any of your databases. That is a bad idea.

Auto expansion was next on the list. It is a pretty cool feature, but I think it requires more work in setup than it is really worth. You can set up thresholds and threshold actions that notice space running low, grow your file or raw device if possible, and then alter the database onto that space. I am of the philosophy that I, as the DBA, want to be in control of my disk and to know when it has grown. I do set thresholds and I am notified when they fire; at that point I can decide whether I need to add a new device. I do not grow my devices; they are all the same size. If I need more space, I request a device from our Unix admin, and then I can init the device and alter the database onto it.
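For the notification side of this, a minimal threshold sketch (the free-page count and procedure name are examples) looks like:

use pubs2
go
-- Fire sp_mythresholdaction when the default segment drops
-- below 2048 free pages
sp_addthreshold pubs2, "default", 2048, sp_mythresholdaction
go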

Finally we talked about tempdb. This is a broad topic, and I am only going to mention the one thing from class that I think will be most useful: setting up additional tempdbs and binding users to them. I can see that it may be useful to bind the DBA accounts, such as sa, to a separate tempdb where users cannot touch it. This could be very useful in test and dev environments where some process may eat up all of tempdb; it ensures the DBA can still log in to the database and do something about it without having to shut the server down to clear things out.
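A sketch of setting that up (the device and database names are placeholders):

create temporary database tempdb_dba
    on tempdb_dev = "500M"
go
-- Bind the sa login ('lg' = login) to the new tempdb
sp_tempdb 'bind', 'lg', 'sa', 'DB', 'tempdb_dba'
go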

 

I will continue to post about this training class over the next few days.


December 2, 2008  3:10 AM

Sybase Training Day 1

Colin Smith

Today was my first day of class, and it was a good one. I arrived at the Sybase training facility this morning, found the class, and saw from the student list that only three students were registered. Only two of us showed up, though; I found it ironic that the person who did not show up was a Sybase employee. Anyway, we started with an overview of what to expect from the class and then got into some detail.

Module 1 was all about installing a Sybase server. This is pretty basic stuff. Really, you just need to download the software from Sybase for the OS that you are using. If you are using a Unix OS, which I am, you can open an xterm session, connect to the server, and run setup. This launches the graphical installer and you are on your way. You need to answer three questions to get all of the binaries and associated files installed into the $SYBASE directory. After the install is complete you can continue with the wizard to set up the server instance, or you can go back later and run srvbuild. The latter is a better way to do this if you ask me, since you have one screen to fill out instead of three or four. So that pretty much covers setup.

Module 2 was all about connecting to the database. ASE supports many connection methods, but it also has a preferred way, called Open Client/Open Server. For this to work you must have an interfaces file in the $SYBASE directory. This file holds information about the servers: the server name plus connection info such as the protocol and the port the server is listening on. Once you have this set up (it is created during setup), you can use isql to connect to the server once it is running. If you would like to connect from a Windows machine, you will instead need a sql.ini file with the same information in it. The format is a bit different, but it carries all the same data. That is pretty straightforward.
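As a sketch, matching entries might look like this (the server name, host, and port are made up):

# $SYBASE/interfaces on Unix
SYBPROD
    master tcp ether dbhost01 5000
    query tcp ether dbhost01 5000

# sql.ini on Windows: same information, different format
[SYBPROD]
master=TCP,dbhost01,5000
query=TCP,dbhost01,5000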

Module 3 was about how to configure the server. In this module we did not talk about what the configuration values should be, just how to see them and change them; really, we only discussed how to change the config file. You have two choices: you can edit the file with vi or any other editor, or you can log into the database as a system administrator and use sp_configure to read and change values. The latter is the preferred method, as about 75% of configuration options are dynamic and do not require a reboot of the server to take effect. sp_configure also modifies the current active config file and makes a backup copy, just in case a change causes issues. Nothing much more to this.
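For example (the parameter and value here are arbitrary):

-- View a parameter
sp_configure "number of user connections"
go
-- Change it; the running server is updated if the parameter is dynamic,
-- and the active .cfg file is rewritten with a backup kept
sp_configure "number of user connections", 200
go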

 

In Module 4 we got into configuring memory, and this is a huge deal. Basically you want to give the data cache as much memory as you possibly can. After you give the OS and any other applications on the machine the memory they need, set the server's max memory to the rest of the available memory. Then configure all options that change the memory allocation on the server, determine how much you need for the procedure cache, and dump everything else into the data cache.
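A rough sketch of that order of operations (all sizes are placeholders; sp_configure memory values are in 2K pages):

-- 1. Cap total server memory (2097152 x 2K pages = 4 GB)
sp_configure "max memory", 2097152
go
-- 2. Size the procedure cache
sp_configure "procedure cache size", 102400
go
-- 3. Give what remains to the data cache
sp_cacheconfig "default data cache", "2G"
go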

 

Looking forward to tomorrow and I will post again.


December 2, 2008  2:46 AM

Sybase Server Training

Colin Smith

My work is sending me to Sybase training in Dublin, CA. I am on my way to SFO, and I will be using the BART system to make my way to Dublin for training. I am looking forward to this class, as I have not yet had any formal training in Sybase. I have been to one entry-level class in Oracle, but Oracle and Sybase are two different beasts altogether. I think I will get more out of this class, as I feel I have a better understanding of Sybase at this point, and the class I am attending is really the second class offering from Sybase. My team lead and I sat down, looked at the classes, and determined that the first class would not be very beneficial to me, since I have already done most of the things it covers. Even in this class I think I have experience with some of the topics to be covered, but not all. If you are interested in Sybase training you can go to http://www.sybase.com to register for a class. I will be blogging every night about the class after class, hoping to share some of the knowledge with you as well as reinforce it for myself. I will be posting tonight after class.

 

One other thing that I must say is that the BART System in San Francisco is great.


November 14, 2008  8:25 PM

Powershell – Get a list of Exchange Users

Colin Smith

In my last post I talked about using the Exchange Management Shell to get a list of users that needed to be placed into a group. Here is what we did. First we had to determine how to find just the users of interest to us. We knew we had set two custom attributes on each of these users' mailboxes, so we decided to use one of those as a search filter. This is a pretty simple one-liner, but very effective and powerful.

Get-Mailbox | Where-Object {$_.CustomAttribute15 -eq "some value"} | Format-Table Name | Out-File "path to file"

So that looks at all mailboxes in the domain and returns only the names of the users whose mailboxes have CustomAttribute15 set to the value we were looking for. Now that I have that list, I am ready to move on to putting those users in the group. If I had added the snap-in for the Quest AD cmdlets, I could have piped straight into that cmdlet and not even needed the file, as in the sketch below, but we wanted the file for records and to verify that the correct users were being modified.
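For reference, that pipeline would look something like this (the group DN and attribute value are placeholders, and it assumes the Quest snap-in is already loaded):

# Assumes: Add-PSSnapin Quest.ActiveRoles.ADManagement
Get-Mailbox |
    Where-Object {$_.CustomAttribute15 -eq "some value"} |
    ForEach-Object {
        Add-QADGroupMember -Identity "CN=GroupName,OU=OUName,DC=DCName" -Member $_.SamAccountName
    }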


November 14, 2008  7:58 PM

Powershell – Add a User to a Group

Colin Smith

Today I was asked to help a fellow administrator get a list of users and add all of them to a specific group. The users he was interested in were all the users he had moved to Exchange since starting his migration process. To get this list of users we used the Exchange Management Shell and the Get-Mailbox command. Once we had the list, we needed to add these users to a specific group. We could have used ADSI to do this, but I thought it was a great time to learn more about, and use, the AD cmdlets that Quest Software provides for free. So we formatted the list to give us only lastname, firstname, I added the snap-in for the new cmdlets, and we were off. Here is what we did, and it worked just great.

 

$users = Get-Content "List of users"
foreach ($user in $users)
{
    Add-QADGroupMember -Identity "CN=GroupName,OU=OUName,DC=DCName" -Member $user
}

 

 

Just that simple, and we added about 450 users to the group. This was a fun one to do, and it was nice to learn more about the Quest tools. They did a good job on the AD cmdlets, and I appreciate them making them available for all. If you have any questions or comments about this, let me know by leaving a comment here or by heading to my website, SysAdminSmith.com, and submitting a question. I will get back to you either way.


November 11, 2008  10:58 PM

MS SQL Server Monitoring Part 4

Colin Smith

It has been a while since my last post about MS SQL Server monitoring using Powershell. This is because I found that I was not catching all the errors I wanted: I was missing any jobs that were failing. SQL Server gives you the ability to output job failures to the Windows event log, but not into the SQL Server ERRORLOG file, so I decided to monitor the Application event logs on the servers as well. This was not as easy as I had hoped. First I have to get a connection to the event log, and then I can scan it; I also only want to scan the log from the last time it was scanned until now. That took some creativity and some work with date data types. Anyway, that is enough talking about the problems. Let's look at some code.

##################################################
## Search Server Application Log for SQL Errors ##
##################################################

$elog = "Application"

# A default instance logs events under "MSSQLSERVER"; a named instance
# logs under "MSSQL$<instancename>"
if ($instance -eq "Null")
{
    $instancename = "MSSQLSERVER"
}
else
{
    $instancename = "MSSQL`$$instance"
}

echo "The instance name is $instancename"

# Timestamp of the last entry examined on the previous run
$time = Get-Content "h:\$folder\time.txt"
echo "Looking for errors since $time`n"

# Attach to the Application log on the remote machine
$objlog = New-Object System.Diagnostics.EventLog($elog, $machine)
$logentries = $objlog.Entries

foreach ($logentry in $logentries)
{
    # The property is TimeGenerated; format it to match the saved timestamp
    $eltime = $logentry.TimeGenerated | Get-Date -UFormat "%m/%d/%Y %R"
    $source = $logentry.Source
    $message = $logentry.Message

    if (($source -eq $instancename) -and ($eltime -ge $time))
    {
        $ignoreit = 0
        foreach ($estring in $estrings)
        {
            if ($message -like "*$estring*")
            {
                $ignoreit = 0
                # A matching error can still be suppressed if it also
                # matches one of the ignore strings
                foreach ($istring in $ignore)
                {
                    if ($message -like "*$istring*")
                    {
                        $ignoreit = 1
                    }
                }
                break
            }
            else
            {
                $ignoreit = 1
            }
        }

        if ($ignoreit -eq 0)
        {
            $mark = 1
            $descript = 1
            echo $folder >> "H:\Alert\Alert"
            if (!(Test-Path "H:\$folder\monitor.out"))
            {
                New-Item -Type File "H:\$folder\monitor.out"
            }
            echo $message >> "H:\$folder\monitor.out"
            echo "Error Found for $source at $eltime"
        }
    }
}

So I am looking at $elog, which I have set to Application; you can change this variable to look at any of the event logs you would like. I also have to tell it the last time I looked for an error. I do this by setting up a file in $folder that keeps the timestamp of the last entry I have looked at, initially set in mm/dd/yyyy hh:mm format. I read this in, then read the time of each entry I am scanning and convert it so I can compare against that last timestamp. Once I do that, I also compare the source of the entry to the instance name I am searching for. If the entry meets the criteria, I start looking for strings that are in my error_strings file, while ignoring entries that also match a string in the ignore_strings file. After that it is pretty basic: if I find an error, I put it in the monitor.out file and put an entry in the alert file for this server, which triggers the notifications until I or somebody responds to the error. Let me know if you have any questions or comments. Leave them here or stop by http://sysadminsmith.com and submit a question to me.


November 6, 2008  11:17 PM

Powershell Alternate Credentials

Colin Smith

I received a question from Bryan about my post on how to use Powershell to monitor disk space. He was curious how to pass alternate credentials in Powershell in order to connect to some servers sitting out in a DMZ. First I want to thank Bryan for this awesome question, and thank him again for heading over to http://sysadminsmith.com and submitting it. You can also leave comments here and I will do my best to get to them all. Anyway, I knew that Powershell has a Get-Credential cmdlet that opens a Windows authentication box for you to enter a username and password, but we do not want to have to enter that credential manually every time. So here is what I did:


PS C:\Documents and Settings\smithc> Read-Host "Enter password to secure:" -AsSecureString | ConvertFrom-SecureString | Out-File c:\securepass.txt
Enter password to secure:: ********
PS C:\Documents and Settings\smithc>

So that was not too bad. The file now holds a long encrypted string that looks nothing like what I typed for my password, which is awesome. Even if somebody gets access to this machine, they cannot read the password in a text editor, and even reading the file back in Powershell still does not reveal the plaintext. I love that. Anyway, now we have a password stored as a secure string and we want to use it for our credential. Here is how we do it:
$pass = Get-Content c:\securepass.txt | ConvertTo-SecureString
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "Username", $pass

Is that cool or what? I love this stuff.

So now here is the code with the new pieces worked in: the two credential lines at the top, and the -Credential parameter on the Get-WmiObject call.

##########################################
###       Gather Disk Information      ###
##########################################

Clear-Content "D:\Scripts\Powershell\PAC\lowdisk.txt"

$i = 0

# Build the credential from the encrypted password file
$pass = Get-Content c:\securepass.txt | ConvertTo-SecureString
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "Username", $pass

$users = "email@email.com", "email@email.com"
$computers = Get-Content "D:\Scripts\Powershell\PAC\computerlistall.txt"

echo "ServerName        Drive Letter    Drive Size    Free Space    Percent Free" >> "D:\Scripts\Powershell\PAC\lowdisk.txt"
echo "----------        ------------    ----------    ----------    ------------" >> "D:\Scripts\Powershell\PAC\lowdisk.txt"

foreach ($computer in $computers)
{
    # DriveType 3 = local fixed disks; the parameter is -Credential (singular)
    $drives = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $computer -Credential $creds |
        Where-Object {$_.DriveType -eq 3}

    foreach ($drive in $drives)
    {
        $size1 = $drive.size / 1GB
        $size = "{0:N2}" -f $size1
        $free1 = $drive.freespace / 1GB
        $free = "{0:N2}" -f $free1
        $ID = $drive.DeviceID
        $a = $free1 / $size1 * 100
        $b = "{0:N2}" -f $a

        ##############################################
        ##        Determine if any disks low        ##
        ##############################################

        # Group the drive-letter tests so the free-space check applies
        # to all of them, not just C:
        if ((($ID -eq "D:") -or ($ID -eq "S:") -or ($ID -eq "T:") -or ($ID -eq "C:")) -and ($free1 -lt 1))
        {
            echo "$computer        $ID            $size        $free        $b" >> "D:\Scripts\Powershell\PAC\lowdisk.txt"
            $i++
        }
    }
}

####################################################
##  Send notification if alert count $i is above 0 ##
####################################################

if ($i -gt 0)
{
    foreach ($user in $users)
    {
        echo "Sending email notification to $user"
        $smtpServer = "smtp server"
        $smtp = New-Object Net.Mail.SmtpClient($smtpServer)
        $emailFrom = "fromuser@domain.com"
        $subject = "Email Subject"
        foreach ($line in Get-Content "D:\Scripts\Powershell\PAC\lowdisk.txt")
        {
            $body += "$line `n"
        }
        $smtp.Send($emailFrom,$user,$subject,$body)
        $body = ""
    }
}

So I hope that this answers Bryan's question, and I hope it will help out some others. Let me know if you have questions or comments about this; leave them here or check out my site at http://sysadminsmith.com.

