OK, you are overthinking it. Having multiple copies of the same stored procedure, or multiple copies of the same database on the same server, won't do anything but make the thing harder to manage.
SQL Server can handle thousands of users all calling the same procedure at the same time.
Everything in SQL Server is handled via what's called a scheduler. The scheduler is basically a thread within the database engine, and each CPU core gets its own scheduler. A scheduler can only run a single process at a time; however, if a process can't make progress (for example, it is waiting for a page to be returned from disk) then it is taken off the scheduler, another process starts running, and the process that was taken off has to go to the back of the queue and wait for CPU time again.
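You can actually watch this happening through the DMVs. This is just an illustrative query; the thresholds that matter will depend on your workload:

```sql
-- One row per scheduler; VISIBLE ONLINE schedulers map to CPU cores.
-- current_tasks_count includes tasks currently waiting on resources;
-- runnable_tasks_count is tasks that are ready to run and queued for CPU time.
SELECT scheduler_id,
       cpu_id,
       status,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

A consistently high runnable_tasks_count means processes are queuing for CPU rather than waiting on disk or locks.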
Even with complex joins, if your tables are properly indexed and your query is written correctly, you should be able to get subsecond response times.
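As a sketch of what "properly indexed" means (the table and column names here are made up for the example), an index that matches the join key and filter columns lets the engine seek instead of scanning:

```sql
-- Hypothetical schema: Orders joined to Customers, filtered by date.
-- The index covers the join key and the filter column, and INCLUDEs
-- the selected column so the query never has to touch the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalDue);

SELECT c.CustomerName, o.OrderDate, o.TotalDue
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '2024-01-01';
```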
CLR typically doesn’t scale very well. I know that when I worked at MySpace we pushed out a very simple CLR stored procedure to the farm, and the farm crashed in like 45 minutes or something.
SQL Server can support a huge number of users at a time. Where you'll run into problems is if you have so many connections to the database server that you run out of TCP ports. That takes a while, as you can have roughly 60k TCP ports in use at once, and it translates to a lot more users than that because you'll be using connection pooling and users aren't constantly connected to the database.
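If you want to keep an eye on how close you are getting to that limit, you can count the open connections from the DMVs (again, just an illustrative query):

```sql
-- TCP connections currently open against this instance, broken down
-- by client address. With pooling enabled this stays far below the
-- number of actual end users, since pooled connections are reused.
SELECT client_net_address,
       COUNT(*) AS connection_count
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP'
GROUP BY client_net_address
ORDER BY connection_count DESC;
```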