SQL Server with Mr. Denny


April 16, 2018  4:00 PM

But why won’t you sponsor at my event?


Being a vendor/exhibitor at a few conferences has been an eye-opening experience for sure. Sadly, exhibitors can’t sponsor every event that’s out there. It used to be that vendors could sponsor them all, but back then there were only a few conferences relevant to any given vendor, so doing so was “affordable”. Today, with SQL Saturday events, Code Camps, and User Group meetings, as well as the large conferences like Build, Ignite, etc., fighting for those marketing dollars is a lot harder than it used to be.

What’s it worth?

Vendors, both small and big, have to get something out of the event. With most of these events, the name of the game is email addresses to add to their mailing lists (you didn’t think booth swag was free, did you?), and if the vendor has already sponsored an event in the past, especially over a couple of years, then that vendor probably has the contact information for most of the attendees already. If a large percentage of attendees are new to the event each year, I’d recommend highlighting that in the sponsor prospectus that you send out to vendors to get them to sponsor.

Speakers are not sponsors

The other major thing that events have going against them is treating speakers like sponsors. Now, I’ve been a speaker at events for a decade, and I’ve always drawn a clear line between being a speaker and making myself an exhibitor for free (or getting paid to be an exhibitor when I have a precon). Some speakers haven’t always done this, and some have gotten rather blatant about it, with the events they present at doing nothing to curb those speakers from getting the benefits of being an exhibitor without paying for the privilege. Recently a speaker was collecting contact information from attendees during their precon in exchange for a recording of the session. The attendees who handed over their information were leads for the speaker, the same kind of leads that an exhibitor would be paying a large amount of money for on the exhibit hall floor.

People are going to say that events are welcome to allow speakers to act like exhibitors. And they absolutely are; those events should also not be shocked when the sponsor money dries up. As a vendor, there’s nothing that says I must sponsor a given event, and if, as a sponsor, I don’t like how I and the other sponsors are being treated, then I’m free to take my sponsorship dollars elsewhere. This is one of those things that events only get to do once. Once an event has a reputation with the various sponsors and exhibitors, that reputation is going to stick around for a while, even after the speakers are no longer being treated like sponsors.

Sometimes speakers need to draw that line between speaker and sponsor themselves, and where that line sits is swag: selling or giving away goodies to attendees. Now, I know that people love getting swag, and I know that people love giving swag, but people who aren’t paying the event for the privilege of giving it away (and yes, that’s basically what exhibitors are paying for) shouldn’t be handing it out. Events cost money to run, usually a lot of money, and events get that money from their sponsors. If sponsors don’t feel like they’re getting their money’s worth out of the event, then the money goes away along with the sponsor, and the event may not have the cash on hand to run again. Suddenly that’s a lose-lose proposition for everyone.

Denny

April 9, 2018  4:00 PM

Denny’s first public speaking event post surgery is…


In case you didn’t catch it, I’ve been out for a while, with good reason.  Apparently, emergency brain surgery takes a while to recover from.  I’m not back to 100% yet, but I’ll be speaking at my first event post-surgery, and it’s this weekend at SQL Saturday Orange County.  I’ll have Kris, Joey, and John with me at the speaker dinner and after-party on Saturday evening.  I’ll be talking about on-prem storage and maybe a little Azure/cloud storage as well.  Overall, it’ll be a great day to see old friends and meet some new ones.  So get registered for the SQL Saturday, and we’ll see you there.

Denny


April 5, 2018  3:00 PM

5 reasons Cloudflare’s roll-out of 1.1.1.1 has been a disaster


I get where Cloudflare was going with their 1.1.1.1 DNS server, but the rollout, in my opinion, has been a disaster.


1. Caching

For starters, most people are already running DNS servers at home. They may not know it, but they are.  Odds are your router is running DNS for you, and it’s probably pretty quick.  Even if your router answers in 40ms while Cloudflare’s DNS boasts 15ms response times, only the first lookup is any faster; after that, it’s all cached.  Once your computer has cached the DNS entry locally, you’ve saved 25ms or so (in this example) exactly once.

2. Profit?

Cloudflare also claims that their service stops your ISP from seeing where you are surfing on the web. It doesn’t. I spent years working for an ISP.  We were moving your packets to the websites you visited, and we knew what websites you were going to even without tracking your DNS lookups.

Cloudflare claims that they’ll be deleting the logs and not selling any data collected by this service.  Now, I don’t have an MBA, but where’s the profit? Running a global DNS service isn’t free, or even cheap. Companies don’t do this out of the goodness of their hearts. They have to make a profit on services, or they pull the plug on them.  So something has to be making money, or the board of Cloudflare will get sick of funding this real quick.

3. ISPs

Several ISPs are blocking access to 1.1.1.1.  I know that my ISP at home does: I get a lovely “Unable to connect” error in Firefox when I try to browse to the website running on 1.1.1.1. And yes, I know it isn’t my machine, as it works fine when I VPN into our colo, which has a different network provider.  There are several other ISPs blocking this access as well.  Years ago I worked for an ISP, and we knew where every customer went, not because of DNS, but because we were capturing the headers of the network packets so that we could find response problems on sites. It really wouldn’t be hard to tie this data back to a user. Knowing what IP you got from DNS really wouldn’t stop us from tracking you on the Internet if we wanted to.

And they do this, I assume, for the reasons discussed in Cloudflare’s own blog post: there’s a lot of junk data being sent to these IPs, so a lot of ISPs are simply blocking access to them to make their lives easier and save themselves the network costs of carrying that data.

The blog post that Cloudflare released talks about how Twitter was used during the Turkish uprising, and how people got around the country’s blocks by using Google DNS instead of the in-country DNS.

This shows that the blocking done by the country was lazy, not that DNS from Google fixed anything.  If Turkey (or any other country) wanted to block access to Twitter no matter what DNS you’re using, it could simply block access to 104.244.42.0/24 (or whatever IP range the public IPs for the service come up as) instead.

4. Login Pages

On top of that, several hotels, hospitals, convention centers, etc. use 1.1.1.1 as the login page for their captive portal, so they block external requests for that IP.  One of the reasons everyone uses that IP for their login page is right there in the Cloudflare blog post: that IP wasn’t publicly in use before Cloudflare launched this service, precisely because so much junk was being sent to it. So, because of that, lots of people use it, or block it.  You can see this right on Twitter, where SwiftOnSecurity shared a DM from a network engineer.  Should they be using this IP? Maybe.

We can’t expect every company that’s using 1.1.1.1 to reconfigure its network because Cloudflare decided to start offering this service. That IP is even the default in some Cisco models deployed around the world. I know that in a variety of hotels (and the hospital I was in last year), 1.1.1.1 was the login portal for the Wi-Fi.  If I set my laptop’s DNS to 1.1.1.1 and went to any of these sites, I wouldn’t be able to browse.  Stopping people from using their computers until they make a configuration change is a problem.

I get that the 1.1.1.1 IP isn’t a reserved private IP, but there are RFCs, and then there is the real world. And in the real world, that IP is in use on private networks all over the world, and everyone knows it’s in use.

5. Ownership

Would you be surprised to learn that Cloudflare doesn’t own the IP space used by their DNS service?  I sure was.  The two public addresses that have been published are 1.1.1.1 and 1.0.0.1.  Those are both owned by APNIC Research and Development, which means that APNIC could decide that Cloudflare is done and simply shut down the service with no notice to Cloudflare or its users.  And since Cloudflare doesn’t own the IP addresses, there’s nothing Cloudflare could do if that happens besides weather a PR disaster.

Should we block?

Now, I’m not saying that places should be blocking access here. But if I were a dictator looking to keep my people from getting online, there are much easier ways than blocking DNS (I’m assuming details like this are left up to some systems team somewhere).

Will all this get better? Cloudflare says that it will. I don’t see it getting much better. We’re talking about reconfiguring a large number of hotels, convention centers, hospitals, etc. with little to no benefit to them.  We as a technology community have been trying to get IPv6 in place for 20 years, and that still isn’t even close to happening, and that involves a much smaller number of companies having to reconfigure things.

Denny


April 4, 2018  7:01 PM

A new, better way to buy Azure SQL DB


Today Microsoft announced that there is a new way to buy Azure SQL DB. If DTUs aren’t making sense to you, you’ll be happy to know that you can now simply select how many vCores you want for your SQL DB workload.  Now, this still requires that you have an understanding of your workload to use the new vCore-based way to buy Azure SQL DB, but cores are a concept that’s easy for people to talk about and wrap their heads around. This new model is only in preview at the moment, but I’m guessing it’ll stay in preview for a while and then go GA, as this new model makes sense.

Personally, I see people moving to this new model instead of the DTU model, as the vCore model is just easier for people to figure out and explain around the office. Also, there’s no math needed to convert what you have today into DTUs.

Another nice benefit of using vCores for SQL DB is that if you have SA, you can use your existing SQL Server licenses to get a discount on your SQL DB pricing.  Now, this does require that you keep your Software Assurance current on those licenses, so that may end up eating at least some of your savings.  A lot more math will need to be done to see how this works out.

In the portal, you select the number of cores that you want for your use case and the maximum size of the database.  vCores give many customers a much more scalable solution than they had with just DTUs.  You should see the new vCore options in the Azure portal shortly (I can’t see them in my normal portal yet), and they’ll be available in all regions as they roll out.
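
If you’d rather script the change than click through the portal, the same move can be made with T-SQL. Here’s a minimal sketch, assuming a hypothetical database named MyAppDb and the preview’s General Purpose Gen4 objective naming (check the current objective names in your region before running anything like this, and run it while connected to the master database):

-- Move a database onto the vCore-based General Purpose tier with 4 vCores (Gen4).
ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'GeneralPurpose',
            SERVICE_OBJECTIVE = 'GP_Gen4_4',
            MAXSIZE = 256 GB);

-- Check what the database is running on once the scale operation finishes.
SELECT DATABASEPROPERTYEX('MyAppDb', 'ServiceObjective') AS service_objective;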

The big question for people will be: which model makes the most sense? And that answer is going to be vCores.

The other big question that I see is: how many vCores do I need for X DTUs? That answer is a little greyer.  It’s going to take some trial and error when migrating to find the sweet spot, so that you have enough resources without paying for more than you need.  The nice thing about the cloud is that you can scale up and down quickly and easily depending on what you see in your application’s response times.  For most customers, I’d recommend starting with a larger number of vCores, and if utilization in the portal (or SCCM if you have it) is low for that database, then ramp down and see how it responds.
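
One way to sanity-check whether a database is oversized before ramping down is the sys.dm_db_resource_stats DMV in the database itself, which keeps roughly an hour of utilization samples at 15-second intervals. A quick sketch:

-- Recent resource utilization for this database; consistently low numbers
-- suggest there's room to ramp the vCore count down.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;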

On top of the new vCore-based options that Microsoft announced today, you also have the managed instance option available. Managed instances can only be sized using the new vCore model; there is no option for a number of DTUs.  Managed instances are the solution Microsoft provides that sits between IaaS SQL Server in VMs and SQL DB: Microsoft hosts the databases for you, and you can do most everything that you can do on-prem or in IaaS, but in a PaaS model.  Realistically, the only way this option would have made a lot of sense is with the vCore model that Microsoft is announcing today.

Overall, I think this vCore option is going to make a lot of sense, and realistically it’s where Microsoft should have started instead of DTUs.

If you are looking to move into SQL DB, DCAC can help you get there, using either the DTU model or the vCore model.

Denny


April 2, 2018  4:00 PM

Why Don’t Universal Groups Work in SQL Server?

Image: Locked Bike (https://www.flickr.com/photos/123327536@N08/23891946434)

If you’ve tried using Universal Groups in Active Directory to grant access to your SQL Servers, you may have noticed that users who are members of those groups can’t access the SQL Server instance. The reason for this has more to do with Active Directory than with SQL Server. Normal groups in Active Directory are cached, so authentication requests can return the groups that a user is a member of as part of the Windows token. Universal Groups, however, aren’t included in the Windows token, because a Universal Group that the user is a member of might not be in the same domain that handles the authentication request.

The internals of why Universal Groups don’t work require a decent understanding of the internals of Windows authentication tokens and Windows security. But needless to say, all you really need to know is that Universal Groups don’t work with SQL Server.

Because the Universal Groups aren’t in the authentication token, when SQL Server goes to check whether the user has access, the token says that the user doesn’t. The fix for this is quite easy: use a different Windows domain group type than Universal Groups.
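
As a sketch of what that looks like (the domain, group, and user names here are made up), grant access through a Global or Domain Local group instead, and use xp_logininfo to see which group path a user’s access actually resolves through:

-- Grant access through a (non-Universal) domain group.
CREATE LOGIN [CONTOSO\SQLUsers] FROM WINDOWS;

-- List the permission path(s) through which this user can reach the instance;
-- if the user's only path is via a Universal Group, no usable path comes back.
EXEC xp_logininfo @acctname = 'CONTOSO\jdoe', @option = 'all';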

Denny


March 26, 2018  7:36 PM

Azure precon at SQL Grillen in June


I’m thrilled to announce that we’ll be hosting a pre-con at SQL Grillen this June in Lingen, Germany, titled “Designing Azure Infrastructure for Data Platform Projects”, which you can see on the event’s schedule page. The abstract for the session is:

In this daylong session, we’ll review all the various infrastructure components that make up the Microsoft Azure platform. When it comes to moving SQL Server systems into the Azure platform, having a solid understanding of the Azure infrastructure will make migrations successful and supporting the solutions much easier.

Designing your Azure infrastructure properly from the beginning is extremely important. An improperly designed and configured infrastructure will cause performance problems and manageability problems, and it can be difficult to fix without downtime.

As Azure scales around the world, many more companies, no matter where they are located, will begin moving services from on-premises data centers into the Azure cloud, and a solid foundation is the key to successful migrations.

With four new regions announced in Europe, bringing the total of European regions up to 12, more and more companies will be looking at the cloud for their hosting needs. Seating for this day-long session is limited, so you’ll want to register right away to ensure you get a seat.

Denny


March 19, 2018  4:00 PM

Denny Cherry & Associates Consulting has two VMware vExperts


VMware recently announced their vExpert list for 2018, and I’m proud to say that both Joey and I are on it. The VMware vExpert program is designed to recognize those who help out the VMware community at large. While the vExpert list looks rather large, it’s a big world, and there are a lot of people around the globe helping VMware users set up various VMware products. I know that both Joey and I are thrilled to have received this award for 2018.

Denny


March 12, 2018  4:00 PM

I want VMs in Azure to be members of my on-premises domain. How do I do this?


Having VMs in Azure that are members of your on-premises domain is a pretty important part of your cloud implementation.  There are a couple of ways to do this, but I’ll be covering what we at DCAC think is the best option.  In basic terms, you’ll set up a VPN from the Azure vNet to your on-premises network.  From there, you’ll add domain controllers in Azure that are members of the on-premises domain (not just another domain with the same name), change your vNet to use those DCs as its DNS servers, reboot your other VMs, and then you’ll be able to add those VMs to your Active Directory domain.  Here are the details:

The first step is to put some domain controllers in Azure.  To do this, you’ll need a site-to-site VPN between Azure and your on-premises environment.  If you have multiple on-premises sites, then you’ll want to create a VPN between Azure and each of your on-premises environments.  If your Azure environment is hosted in multiple regions, then you’ll want to create a mesh network where each on-premises site is VPNed into all of your vNets.  You’ll probably also want your vNets VPNed to each other (peering your networks between sites may be an option as well, depending on how you’ve set things up).  If you have an extremely large number of users at a site, then ExpressRoute might be worth looking into instead of a site-to-site VPN.

(Image via Flickr: https://www.flickr.com/photos/ilamont/4150684641/)

Once the site-to-site VPN (or ExpressRoute) is in place, you can focus on putting some domain controllers in Azure.  Each site within your Azure environment should have at least two DCs, and they should be created within an Availability Set or an Availability Zone (depending on what your standard is going to be for setting these up).  Set the vNet to use the office DCs as its DNS servers for now. Once that’s done, reboot the Azure VMs that you want to make domain controllers and promote them to DCs.  When promoting them, you’ll probably want them to be a fairly large VM size so that the promotion process doesn’t take too long; you can resize them later.  Once the VMs in Azure have been promoted to DCs, you’ll want to give those VMs static IP addresses (whatever IPs they already have are fine).  Make a note of these IPs, as you’ll need to enter them in a second.

Once the DCs are set up, go into the vNet configuration, set the DNS servers for the vNet, and change the vNet to use the new Azure DCs as its DNS servers (these are the IPs you wrote down at the end of the prior paragraph).  Then reboot any VMs that you’ve already created in the vNet.

At this point, all the VMs that you’ve created can be added to the domain without issue, just like any other machine in your environment.

Denny


March 5, 2018  4:00 PM

What should MAXDOP be set to?


(Image via Flickr: https://www.flickr.com/photos/126080172@N03/14901447725/)

A lot of the examples you’ll see for MAXDOP on the web assume a large server with multiple physical sockets and multiple NUMA nodes.  But if you have a smaller server, like a lot of us do these days, what should you set MAXDOP to?

The basic rule still applies: set MAXDOP to half the size of your NUMA node. Just because you’re running a smaller server doesn’t mean that you don’t have NUMA configured; it just means that you have a single NUMA node.  So, for example, if you have a server with one socket and six cores, then your MAXDOP should probably be set to 3, since three is half the size of that NUMA node.

Now, there’s a lot of “it depends” that goes with this, but it gives you a starting point. You might need to decrease MAXDOP to 2 and see how that affects the server.
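
For reference, here’s a minimal sketch of making that change with sp_configure, using the six-core, single-socket example above:

-- Set MAXDOP to 3 (half of a six-core NUMA node).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 3;
RECONFIGURE;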

Keep in mind that changing MAXDOP during the business day isn’t recommended, as changing MAXDOP flushes the plan cache and causes all queries to be recompiled.

Denny


February 26, 2018  4:00 PM

Can I Use Azure Threat Detection On-Premises?

Image: Locked Bike (https://www.flickr.com/photos/123327536@N08/23891946434)

The short answer here is no.  The threat detection features that you see in Azure are not available in the on-premises product, and that includes SQL Server instances running in Azure VMs.  The only way to get the SQL Server threat detection features that Azure offers is to use the SQL DB service in Azure.

 

Denny

