An explosion of desktop software.
I teach database systems and 3D animation at my university. The other day I was telling a friend about some of the database, graphics, software development, and other applications I work with. I was trying to make the point that over the last few years there has been an explosion of powerful, novel desktop software applications, that it’s a fun time to be a software user. With each one, I told him what company sold it and where they were located. His eyes widened.
“What’s wrong?” I asked.
He told me that he was impressed at how many places around the world were apparently producing cutting-edge software applications. Now, he’s not a computer guy. I told him something that people who use a lot of software typically discover on their own. Since software can be built on cheap computers, marketed on relatively simple websites, and sold as downloads, anybody anywhere can be a successful software vendor. All you need is knowledge, skill, and drive.
Software is a global product.
Here’s a piece of the list I gave him. I use the following apps:
– DBVis, a relational database GUI from Iceland (my favorite DB GUI).
– SQL Maestro, a relational database GUI from Russia.
– Navicat, a relational database GUI from Hong Kong, China.
– Indigo, a renderer from New Zealand.
– Maxwell, a renderer from Spain.
– Toonboom, a 2D animation application from Quebec.
– Vue, a graphics application for producing 3D outdoor environments from France.
– Qt, C++ development software from Norway.
– Komodo, a scripting editor and IDE from Vancouver, Canada.
– Versions, a Subversion client that, as far as I can tell, is from Portugal and the Netherlands.
– Voila, a screen capture application from India.
– Pixelmator, an image editing application from England.
I’d also like to point out that these are all extremely good products. They vary a lot in their complexity and cost, but each of them is first-rate. (I also apologize if I have any of the home countries wrong.)
It’s also true that a lot of the larger, emerging software and services firms are from countries like India and China. It’s also true that many open source applications are developed and maintained by teams from around the world, collaborating online.
Now, here is my observation as a prof at a big state school.
We are training fewer and fewer Americans to be software professionals. In the United States, it’s not a profession that’s highly respected. Programmers are supposed to be nerds, and their work is portrayed as lonely and tedious. American kids want to be film makers, reality TV stars, lawyers, and doctors. So, enrollments in computer science programs have dropped. A large percentage of computer science graduate students in the United States come from overseas. (A major reason I like my job is that I get to interact with young people from around the world.)
It’s not surprising that more and more cutting-edge software development is going on outside the United States.
And hey, computer professionals from the United States are starting to find that a lot of the best jobs, those with dynamic, innovative companies, are not in the U.S.
So, software professionals are citizens of the world.
I try to devote this blog to issues relating to the emerging Semantic Web and Web 2.0/3.0 technologies. But I’m digressing here to put in a pitch for … Microsoft Windows!
Windows versus Macs.
The issue involves fixing problems. I happen to be a Mac user and know my way around its Unix guts pretty well. I happen to like the fact that on a Mac, there is no Achilles heel, like the registry on Windows machines.
Applications install dramatically faster than on a Windows machine, too.
And, Windows machines come out of the box all set for light users, people who run Office apps, process email, and browse the Web. If you want to run more heavy-duty applications, ones that don’t have click-and-let-it-happen installers, or if you want to develop software on a Windows machine, you often find yourself in a chain-reaction process where you incrementally discover more and more things you need to install. Even installing Microsoft development software, like Visual Studio, can turn into an adventure lasting a couple of hours. (But Visual Studio’s installer generally takes care of everything, and all you need is patience – a lot of patience.) Macs, on the other hand, have a free, heavy-duty development environment called Xcode that you can download and just stick on your hard drive. Various languages and database products are preinstalled.
I admit, though, that building Web apps on a Mac is a problem. The development environment doesn’t provide a lot of help there.
But, I recently ran into some problems with my Windows 7 machine and was actually impressed at its robustness. I don’t use Windows machines much and I don’t understand their guts that well. So, I am a nervous user, waiting for that registry problem or other seemingly minor issue that keeps the thing from booting. I have to use a Windows machine, however, because I teach database systems, and a couple of the major products do not run on Macs at all. I also teach animation, and there are a couple of powerful animation apps that do not run on Macs, either.
So what happened?
It all started when I decided to install some software development apps, and quite frankly, since I didn’t understand the Windows OS that well, I screwed things up. (I won’t go into the details because I am embarrassed.) Suddenly, my machine wouldn’t boot – but it automatically went into its disk check mode and fixed the problem. It took an hour or so, but it healed itself. To be honest, I don’t even know what it fixed exactly.
System state restore.
Then, with a sense of great relief, I dived in again, and this time I really did a nasty job. I blew out my user profile. I also did something that caused Windows Explorer to conclude it didn’t have any legit access to anything in my file system; only Firefox could download something and store it. A few database development apps could no longer find their data; my connection profiles and SQL code were gone, seemingly.
So, I ran the Windows system restore facility. It told me exactly what applications would be gone when I rebooted. It fixed itself and I reinstalled a couple of things.
System image restore.
But I didn’t stop there. I made another mess and this time, I didn’t just cause some data to get disconnected from their applications. I deleted a bunch of critical stuff myself, somehow. Some of it was in the OS – ouch!
So, I ran a complete system image restore, something I had been carefully building on a weekly basis for a year or so. I didn’t really think it would work. It did.
You can kill it and it rises up.
I discovered something else along the way. My memory of older versions of Windows (I used Windows machines until about the time XP came out) was that if you pulled the power on the thing, or if it froze and you had to reboot it forcibly, there was a good chance that it would refuse to start up, that something being written at the time of the crash was incomplete, inconsistent, and deadly.
But you know, I’ve hard rebooted my Windows 7 machine a bunch of times. It’s always come up. Wow.
I received a comment on my last blog entry, where I talked about the fact that it’s hard for students to steal material for their 3D animation projects because I poke around on the Internet pretty aggressively myself, looking for tutorials, as well as example models and scenes. The comment I was sent pointed out two things. First, professional programmers make use of programs and pieces of code they find on the Web. Second, there is a fine line between making legitimate use of content from other sources and essentially copying someone else’s work. The critical thing, he pointed out, is that you need to reference the work of anyone who influenced your work. This includes artistic things, like 3D models, and not just text.
General knowledge: finding it and referencing it.
Certainly, this is true and faculty members know this. Across all academic disciplines, faculty members are well aware that they need to be Web-savvy and give assignments that take into account the ease with which relevant material can often be found. Our attitude shouldn’t always be that everything a student writes should be completely original. How do you find raw material out there? How do you vet it? How do you acknowledge information that does not appear in any traditional archival or news publication? We know all this.
Software components: what’s different?
But the situation with computing is somewhat different and I think very intriguing.
As a professor of computer science, I can tell you that we need to spend less time teaching students how to write raw code and more time teaching them how to cobble systems together from existing, often complex, components. This is without even considering property rights or the ethical responsibility of referencing the resources you have used. It’s a raw technical fact of life. Programming simply isn’t a write-it-from-scratch thing anymore. We send students out there often with very little knowledge of the vast world of existing software development tools and code. They have to work in this world, period. We can’t ignore it when we train them.
Widespread, anonymous collaboration.
There are countless applications out there that would not exist today because building them from scratch would have been technically intractable or not cost-effective. So we do all need to work together, even if it is anonymously.
In fact, “novel” code that does not build on what already exists is often more than just too expensive. It is often technically unmaintainable.
But should our gold standard for students simply be getting it done, pulling together a system that works properly and will have a long shelf-life? As long as property rights are not being violated, I say yes. I’m not a lawyer, and to be honest, a lot of us who teach computing don’t actually know what is legit and what isn’t. People put surprisingly valuable pieces of code on the Web quite voluntarily. Maybe it’s not theirs to give away. Maybe it belongs to their employer, I don’t know. The stuff that ends up in official open source software development tools seems to be tracked most of the time. We know where it comes from. Right?
I don’t actually know. I’ve never questioned anything that Eclipse has installed for me or PostgreSQL has installed.
There’s also a feeling out there, a very widespread feeling, that the notion of give and take has changed. We don’t blindly protect tools and code. We know that we’re all better off if we share aggressively. We also know that we can make money indirectly. If you are the acknowledged developer of a highly popular software tool, you can often capitalize on this in a big way.
There are people who give things away and then regret it.
And once it’s out there, it’s usually impossible to get it back.
What should we be telling students about all of this?
I teach courses in two very different areas: information systems and 3D animation. The first is my area of expertise. The second is a glorified hobby, and I teach it only because I was asked repeatedly by students to do so. There is a high demand for animation classes, but very, very few offerings at my university.
Cheating in universities.
They say there is a lot of cheating at universities these days. Why? Well, the conventional wisdom is that students are demanding high-powered courses and equally high grades, but they don’t want to work very hard. So, cheating seems to be absolutely necessary.
I don’t actually believe all this. I’m not that jaded, not that cynical. Students will work hard if they sense that a professor is working hard to teach them. I think wide-scale cheating, if it actually does occur, is in response to the growing tendency for universities to treat teaching duties as a punishment for professors who do not bring in enough federal research dollars. Students are not stupid; they know what is going on.
We can catch them, if we bother.
But, back to cheating. I’m sure that some of my students share code when I assign projects in my information systems courses, and that they take code out of various online resources. To be honest, I don’t generally check for this, even though there are programs that teachers can use to compare code (and English documents for that matter) from multiple sources to check for such borrowing.
Passion dictates when we bother and when we do not.
However, perhaps because it is something I enjoy rather than my main area of competence, I do end up checking for cheating in my animation classes. I do it by accident, because I am constantly looking at all the places where my students are looking. I’m not trying to catch cheaters. I’m just excited about learning.
A world of unlicensed experts.
Indeed, this is perhaps a little-noticed consequence of the new age where we are inundated with information 24/7, and in vastly higher volumes than we could possibly consume. Some people pick an area, perhaps subconsciously, and dig in with raw compulsion. They cannot stop themselves until they seem to have seen it all, until the almost infinite reach of the Web seems to circle back on itself. We are an emerging world of unintentional and non-credentialed experts. It’s amazing, really. People who are around us on a daily basis have vast, silent bodies of knowledge that they don’t use in their jobs at all.
So, if you take an animation class from me, be careful. If you copy a model or a scene from somewhere on the Web, well, I’ve seen it already.
This is an update on Mozy.
First of all, they have really ramped up their customer support. My emails have been answered with phone calls. Polite people who know what they are doing have talked to me on the phone and patiently figured out what was wrong – and then seen the job through to the end.
I have had to use Mozy, for real. I have paid accounts with Mozy that cover a total of five machines (two Windows, three Macs), and one of them, a Windows 7 machine, retched and curled up a few nights ago. I spent many hours trying to fix it, and finally went to my Mozy account. My (huge) directory popped up in the Mozy client program in a few moments, and within just a few minutes, I had downloaded and restored what I needed, and I was up and running.
Still behaving oddly on Macs:
I am still seeing some funkiness on my Macs. There are three problems:
1. The status window that supposedly displays the current status of a backup will sometimes stay at zero for a full day and then seemingly do the entire upload in a few moments.
2. It is common for uploads to take a full day or two, when the same volume of uploading from a Windows machine will take perhaps a couple of hours.
3. The Mozy client generally reports the volume of uploaded files in a random fashion, saying I have uploaded 10 MB when I have uploaded a GB, and the like.
But – as painful as it is, Mozy is working on my Macs now.
The new web and Mozy:
This is indeed the way I think backups should be done, offsite and automatically. Many of us would rather run web apps than install cranky, bloated apps on our machines. And we would like to be relieved of having to maintain, configure, and run backups – and check them all the time to see if they really worked.
I teach 3D animation in a computer science department. I’ve been asked why. After all, shouldn’t fine arts people be teaching this stuff?
Here’s the reason: it has a lot to do with speed.
There’s a debate that has gone on in the computing world for the past few decades. Will the cost of hardware ever plummet to virtually nothing? Will the speed and capacity of memory and disks ever reach effective infinity? Will the bandwidth of the Internet ever rise to the point where the cost of moving data is negligible for the average educated person? Will the speed of computers and communication reach a point where the average educated person has no need for any more improvements?
Research sages have always shaken their heads and said no, this will never happen. We will always find ways to use better and better technology. And in fact, we will always demand cheaper and bigger and faster technology. Remember that a couple of decades ago, folks thought we would soon have no need for large server-based database management systems with their complex, clever software that optimizes the use of precious main memory – because memory would be basically free. That has not happened. What we do have are databases that are huge and getting bigger every day. No matter how they grow, only a tiny fraction of them fit in main memory.
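To make that buffer-management point concrete, here is a toy sketch (plain Python, purely illustrative – the class name, page IDs, and page data are all made up) of the least-recently-used eviction idea at the heart of a DBMS buffer pool: keep the hottest pages in scarce main memory and push the coldest ones back out to disk.

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU buffer pool: holds at most `capacity` pages in memory,
    evicting the least-recently-used page when a new one is read in."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> page data, oldest first
        self.disk_reads = 0

    def get_page(self, page_id):
        if page_id in self.pages:
            # Cache hit: mark this page as most recently used.
            self.pages.move_to_end(page_id)
        else:
            # Cache miss: simulate a (slow) disk read.
            self.disk_reads += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[page_id] = f"data-for-page-{page_id}"
        return self.pages[page_id]

pool = BufferPool(capacity=2)
pool.get_page(1)
pool.get_page(2)
pool.get_page(1)  # hit: page 1 becomes most recently used
pool.get_page(3)  # miss: evicts page 2, the least recently used
print(pool.disk_reads)     # 3 disk reads: pages 1, 2, and 3
print(sorted(pool.pages))  # [1, 3] remain resident
```

Real systems use fancier replacement policies (PostgreSQL’s clock sweep, for instance), but the underlying scarcity logic is the same.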
We have proved ourselves proficient at absorbing any technological advances that come along.
Here’s the salient fact: 3D animation will go from the desktop to the web app.
This includes Internet download speed. Even as bandwidth increases, we are constantly warned that the saturation point will be hit – and that Internet brownouts are just around the corner. But for those among us who are digital optimists, it seems that a technological milestone is about to be reached: we will soon be able to embed full-fledged 3D animations in webpages, and users will download them in the blink of an eye. Everyday machines will have the memory and video cards to easily drive them. Three-color, 2D Flash movies that serve only as window dressing or to draw a quick laugh will no longer be the limit of online animations.
Yes, Web 3.0 applications of the near future will contain meaningful videos, ones that convey real information. They will provide technical training and abstract education. They will be interactive, too, rendered in real time – and not simply downloaded as compressed video.
So, programmers of the future will be called upon to deliver web apps that present not just images and sound and video – but sophisticated 3D animation as well.
Animation is everywhere.
I teach an introductory 3D animation class at the University of Colorado in Boulder. Since I am in the computer science department, which is in our school of engineering, a lot of my students are computer science and engineering students. The class also draws film studies, art, and geography students. College students today are well aware of the exploding, broad-based marketplace for animation-savvy professionals.
Animation and programming.
Animation is used heavily in both web and desktop applications. Importantly, a number of 3D animation applications can export Flash content, which opens the animation world to web app programmers who don’t have a knack for drawing. There are also native animation capabilities in many programming environments. In fact, drag-and-drop interface development is widely supported, and there is a growing marketplace for programmers who are familiar with the XML-based language MXML, which is used by Adobe Flash Builder. A prime competitor is the Microsoft Silverlight technology, which uses another XML-based language, XAML. Both of these languages can be used to build desktop applications as well. Then there is the drag-and-drop interface for creating JavaFX-based user interfaces that comes with NetBeans.
In the mainline desktop application development world, there are the C++ drag-and-drop user interface capabilities supported by Qt Creator, as well as the various Swing-based design interfaces supported by Java development environments.
Where artists meet programmers.
These software development tools are enabling non-hardcore programmers to collaborate with more traditional programmers. In fact, there is an emerging class of artistic-minded software professionals, and programmers are starting to sense that their territory is being invaded. To defend their turf, they are flocking to animation classes and asking that programming courses include coverage of 2D and 3D animation.
Programmers have a significant advantage, as it turns out. These drag-and-drop interfaces can be used to produce interface controls for applications easily enough, but they compile down to good old-fashioned non-declarative code, and more code must be written to wire up the controls to specific behaviors, and to produce the server side of web applications.
In fact – and this is my point here – the front lines in the software turf war are going to start moving in the other direction, with more and more programmers trained in animation and able to seamlessly blend animation development skills with traditional programming skills. The pure animation folks are going to have trouble competing.
The bottom line (for me, that is).
This is what I find most exciting about teaching animation: I’m serving a new generation of software professionals who are far more broad-based in their interests and skills. They’re more fun than programmers from my generation.
This blog is dedicated to emerging technology for Web 2.0/3.0, the Semantic Web, and multimedia management. Two postings ago, we looked at the dilemma of trying to teach general concepts to students interested in advanced media, in particular in the domain of 3D animation. In the last posting, we looked at a pervasive problem: Students need not just abstract knowledge, but also hands-on experience, if they are going to compete successfully in the marketplace – and faculty members typically lack applied knowledge of real-world software.
In fact, direct training in the use of modern media applications isn’t just a practical consideration. In order to develop a firm understanding of the direction of modern computing and to gain insight into the problems that need to be solved by future developers of media applications, computing students need to be exposed to the breadth of media management applications.
Today, we take a look at another extreme challenge presented to teachers of modern computing.
Computing – specifically, media management – touches everything.
Computer Science departments in universities are sometimes in Engineering schools and sometimes in Arts and Sciences. A growing phenomenon is that Computer Science is not just a department within a school, but a school of its own with a name like the School of Information.
This underscores the extreme variety in what is considered to be in the domain of modern computing, including the creation of formal mathematical models with little or no immediate basis in the real world, the development of algorithms for performing complex computational tasks, the construction of novel software applications, the study of human interactions with computers, the development of standards for specifying medical information, the application of modern software technology to crisis management, etc., etc.
The point is that computer science graduates might find themselves working in virtually any area of human endeavor.
Interestingly, the domain of media management, including image, video, sound, and 3D modeling and animation management, is a perfect example of this. It is, in fact, everywhere.
So, what to do?
The answer, I think, is that media management needs to be taught in a highly collaborative fashion, with faculty members drawn from across most domains of study. The collaboration shouldn’t be superficial, as is often the case, with students choosing isolated courses from multiple departments and counting them toward a roll-your-own major. That leaves students with no idea as to how varying disciplines are woven together in the real world. Faculty members from, for instance, fine arts and computer science need to plan and co-teach courses that look at the border between art and programming.
This blog is dedicated to emerging technology for Web 2.0/3.0, the Semantic Web, and multimedia management. In the previous posting, we looked at the dilemma of trying to teach general concepts to students interested in advanced media, in particular in the domain of 3D animation.
The conflict between abstract and hands-on training.
I teach an introductory animation class at the University of Colorado, Boulder. The problem I face is that the applications (such as the industry standard, Maya) that professionals use to build 3D animated projects are extraordinarily complex, and users need to be taught how to use them. Thus, if you want to give students both a solid, broad-based education and, at the same time, the satisfaction of building something real along with the hands-on skills they will need in the real world, there simply aren’t enough hours in an academic semester to convey all of this.
So, which is better?
Today, we consider a widely-discussed issue: What is the best thing for students? Learning to use popular tools in wide use today? Or learning abstract concepts that presumably will give a student a solid foundation for understanding, using, and developing media applications for many years to come?
Academics will always say the second thing is their job, that they do not run trade schools.
Professors who themselves need to be trained.
Often, though, this is simply an excuse for the fact that they themselves have very little hands-on knowledge of modern software. They are too busy writing research grant proposals and pumping out papers – the things that are demanded by their employers – to stop and learn how to use the software their students will be expected to have mastered. This is true for more than just video, animation, imaging, graphics, and audio software. Believe it or not, university faculty members who teach computer science often have absolutely no experience with software development technology for building large systems, database systems, web applications, etc., etc.
The abstract training is often not happening, anyway.
There’s more. That long-term training, that in-depth, subtle mastering of general concepts, often doesn’t happen, anyway. When faculty members do not know how to use media applications, they tend to have no idea how they work internally and how they got to be the way they are. They also are generally unaware of the sorts of technology media professionals of the future will need.
Indeed, there is a growing gap between the knowledge that computer science graduates need and the stuff that is actually taught at the university level. Yes, basic, core, abstract knowledge of object-oriented programming, algorithm construction, encryption, and the like, are absolute necessities, but universities could be doing a lot more to flesh out the basic skills that, from a pragmatic perspective, cannot be ignored.
Companies cannot fill in the gap.
There’s one more thing. Corporations can no longer afford to invest a lot of money in untrained college graduates. The world has become more global, and more competitive. Nobody seems to be stepping forward to fill in the gulf between abstract university training and the mastering of hands-on skills demanded by employers.
Media applications, a key technology for today’s professionals.
We’ve looked at modern media applications in previous postings of this blog. They represent a critical core of the new world of end-user software.
The management of advanced media technology, something that a wide class of technical and nontechnical professionals must master, presents a difficult dilemma for both students and instructors. Professionals need to be able to create, edit, store, search, and reuse photographic images, video, music, voice and sound effects tracks, 2D and 3D models, web pages, diagrams, mathematical information, and formatted documents with embedded media. They need to use advanced media applications to get the job done – but they also need abstract knowledge about media management, so that they can continue to stay current, know how to find the right application for a given task, and know the limits of the software they use.
Animation presents a great example of this dilemma.
I’m a computer science prof and teach an introductory 3D animation course. Building an animation project is a complicated process and the applications that animators use are almost unbelievably complex. They have vast interfaces that take years to master. It is difficult to perform even simple tasks without being taught keystroke by keystroke how to get the job done.
It is also hard to apply skills learned in one application to a similar application. Two 3D animation applications, even if they are considered to be close cousins in terms of their functionality and with respect to how they should be used, typically present their own unique learning challenges.
Most instructors don’t want to deliver courses that leave students empty handed, feeling like their heads are full of fancy ideas but they have nothing tangible to show for their trouble. Indeed, it is very hard to teach concepts independently of teaching students how to use a specific animation application, to teach general principles and not have a class degenerate into a here’s-how-to-use-application-X session.
This explains why professional books on animation (as well as other books about media mega-apps) are so painfully prescriptive and why they are rarely used as primary university and college textbooks.
One way of getting the teaching/learning job done.
What I find myself doing is teaching from the bottom up. First, I get the students intrigued by suggesting something that would be fun or useful to build, like a snowman. Second, we build it with Maya, an animation application that is very much an industry standard and the one I use as a teaching tool. This second step, especially later in the class as we do more difficult and detailed things, can be very tedious. (Sometimes I do it wrong the first time or two!) Then, third, we step back and address the general concept that underlies this fun or useful thing. In the case of our snowman, it’s the basics of creating solids, such as spheres, out of flat polygon shapes. This “geodesic dome” approach is a key component of most modeling and animation applications.
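The polygon-sphere idea can be shown outside of Maya, too. Here is a minimal sketch (plain Python, not Maya code; the function name and the subdivision counts are mine) that generates a sphere’s vertices from flat latitude/longitude rings, the way the polygon-sphere primitives in modeling applications do:

```python
import math

def sphere_vertices(radius, n_lat, n_lon):
    """Approximate a sphere by sampling vertices on n_lat latitude bands,
    each band carrying n_lon points around the vertical axis."""
    verts = [(0.0, radius, 0.0)]      # north pole
    for i in range(1, n_lat):         # rings between the two poles
        phi = math.pi * i / n_lat     # angle down from the north pole
        for j in range(n_lon):        # points around each ring
            theta = 2 * math.pi * j / n_lon
            verts.append((radius * math.sin(phi) * math.cos(theta),
                          radius * math.cos(phi),
                          radius * math.sin(phi) * math.sin(theta)))
    verts.append((0.0, -radius, 0.0)) # south pole
    return verts

verts = sphere_vertices(radius=1.0, n_lat=6, n_lon=8)
print(len(verts))  # 2 poles + 5 rings of 8 vertices = 42
```

Connect neighboring vertices into quads and triangles, and you have the faceted ball that students start from before any smoothing is applied.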