August 3, 2012 1:38 AM
Posted by: MikeLaverick
Drum Roll… I’ve got something to announce… But before I do, a message of thanks…
First, I would like to thank the people at TechTarget for supporting me during the last two years – our business-to-business affiliation was a great opportunity for me. I gained valuable insights into your world and the industry generally. Without my time spent working with you guys, I wouldn’t have had the chance to do half of the things I did in my last two and a half years at RTFM Education – such as books, podcasts and my speaking tour of the US in 2011/2012.
Secondly, I would like to thank ALL the people in the community who have supported and helped me during my time running RTFM Education. I simply couldn’t have reached the position I currently enjoy without your support. I would like to think I helped a lot of people get into virtualization and VMware. It’s my full intention to remain active in the vCommunity – to contribute, but now even more so to listen to the community and feed back its concerns and priorities. The folks in the vCommunity are, after all, VMware customers as well. I certainly don’t intend to disappear into the background – that would hardly be commensurate with being out there as an “evangelist”. So expect to continue to see me writing and attending events across the region – and please do get in touch if you’d like me to speak at your event. The quickest way to do that nowadays is via my twitter ID – @mike_laverick
Finally, and most importantly, I would like to thank my long-term companion in life – Carmel Edwards. We met within months of my starting RTFM Education Ltd and the site, and then later Mike Laverick Ltd. She was there in the early days when my project of being an independent/freelance trainer/consultant/writer/podcaster/speaker was in its first gestation. She supported me when my gut told me in 2003 that this thing called “virtualization” by this company called “VMware” was going to be pretty big. She’s had the patience of a saint in listening to me ramble on obsessively about virtualization. I guess the next thing she will have to put up with is my cloud thoughts instead.
As of Monday the 6th of August, I am gainfully employed by VMware. I will be taking up a brand new role of “Senior Cloud Infrastructure Evangelist”. Of course, I considered many different options over recent months, but the opportunity with VMware seemed the best role with the best company. It helped that I got on so well with the team over at VMware, who have known me from the very early days. My new colleagues are a great bunch of people, and I know we’re going to have such a blast bouncing ideas around with each other. I’m so pleased that VMware has put their faith in me in this way. In many ways it feels like the reset button has been pressed and I’m back in 2003. For the first year I will be based in the UK. After a year I will be based in sunny Palo Alto, California…
The move was prompted by many factors. The main one being that although I’ve enjoyed my 10 years of independence, I was beginning to feel I had reached the max of my potential as an individual working in a bubble on his own. I’d kind of hit a personal “glass ceiling”. There was also a decided feeling inside me that the next 5 years were going to be “more of the same”. That didn’t inspire me with passion. I’ve never worked on the “vendor” side before – I’ve never been a email@example.com. To date I’ve always been on the opposite side of the wall, and I felt I owed it to my career and personal development to be on the other side of the wall for a change. I know there is soo0000o much to learn in the coming years, and I will be lapping up every experience!
As a consequence of my new role and changed career direction my direct association with “RTFM Education” the site ends today…
August 1, 2012 2:35 PM
Posted by: MikeLaverick
This week I hoped to kick back, relax and put the final finishing touches to our “Building VMware End-User Solutions” book – which I’m co-authoring with Barry Coombs. Sadly, the experience has been marred by a whole series of file problems. I’m going to be honest with you and say I haven’t definitively worked out WHAT is causing the problem or WHY. Personally, I suspect a fault in the file system, which could be linked to an underlying disk problem.
Things were going swimmingly on Monday when I got back from the US. I started assembling our individual chapters into a master file. Most of this stuff is “ancillary” in nature – tidying up chapter numbering, adding a table of contents and an index. By Monday evening I thought I would be ready to upload the master file (some 30MB) to Dropbox. Then the upload mysteriously failed. Barry emailed me to say some of the images were missing. He was right.
It wasn’t a disaster – I still had the original, as well as the original chapters. Perhaps the upload had failed mid-transfer, leaving a .tmp file behind.
On Tuesday morning I started working with the file locally. After an hour I got these error messages:
Rather foolishly I regarded these as “Word” problems. Fortunately, every time this happened (and it happened a lot) I was able to cut & paste the contents of the file into a clean Word document and save it elsewhere. Tearing my hair out, I must have ended up with about 10 of these files – each with the day/time in the filename so I could keep track of them. Usually, this error would cause Word to crash and use 100% of the CPU. I would have to terminate Word to get it to be responsive again. On the downside I was nearly always working with a “recovered file” with NO idea of what had been lost since my last save. That led me to saving every 60 seconds.
Then I realised I’d been paying more attention to the dialog box logo, and not reading the message: “Disk Error”, “Try formatting another disk”, “Save the document to another disk”. Cripes. Could it be the disk/file system?
So I ran the Apple Disk Utility. Sure enough, the writing was on the wall.
I did another Time Machine backup to CYA, then rebooted the Mac whilst holding down CMD+R – which forces the Mac into a recovery-style mode – and asked Disk Utility to repair the disk. That appeared to be successful.
However, it didn’t stick. This morning the inability to save the Word file returned. Another rescan of the disk indicated file system errors again.
In the end I transferred the file to a memory stick and decided to work from that – taking the dialog box’s original advice. In this way I hope to limp through the rest of the week, so I can still make it to the printers on Friday.
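For the command-line inclined, the verify/repair routine I went through can be sketched with macOS’s diskutil tool – a minimal sketch, and an assumption on my part that you’re happy in a terminal; `diskutil verifyVolume`/`repairVolume` are the CLI face of Disk Utility, and repairing the boot volume still needs the CMD+R recovery environment:

```python
# Hedged sketch: drive macOS's diskutil CLI (the command-line face of
# Disk Utility) from Python. macOS-only; repairing the live boot volume
# still requires booting into recovery mode with CMD+R.
import subprocess
import sys


def diskutil_cmd(action, volume="/"):
    """Build the diskutil command line for verifying or repairing a volume."""
    assert action in ("verifyVolume", "repairVolume")
    return ["diskutil", action, volume]


def verify_volume(volume="/"):
    """Run the verify; returns True if diskutil reports a healthy volume."""
    if sys.platform != "darwin":
        raise RuntimeError("diskutil is macOS-only")
    result = subprocess.run(
        diskutil_cmd("verifyVolume", volume),
        capture_output=True, text=True,
    )
    # diskutil exits non-zero when it finds file system errors
    return result.returncode == 0
```

Running `verify_volume()` periodically would at least have told me the errors were back before Word did.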
Of course, this is the last thing I could possibly want in the final days of finishing up a 9-month book project.
My plan for next week is either to reformat the disk and do a clean install (or use Time Machine – though won’t that just bring back a screwed file system?) or play it dumb and take the MBP back to the shop where I bought it and claim a replacement under warranty. To be honest this is my second MBP, and those of you who follow my twitter stream will know I’ve not been impressed. The fan goes like billy-o a LOT more than on the previous model I had. There’s been this suspicious ticking noise – which made me take it back to the shop, worried it was a disk problem. They said they replaced the disk – and now this. I guess you’d call it Sod’s Law.
July 27, 2012 8:30 AM
Posted by: MikeLaverick
This year I’m speaking about VMware Site Recovery Manager – only this time I’m not flying solo. I’m backed up by a fellow caped crusader from Mountain States Networking – Jeff Drury. I was in Salt Lake City and Portland last year talking to Jeff’s customers about SRM. The title of the session is “SRM: Where Theory Meets Practice”. I’m the theory guy – given that I know nothing about the real world (sic) – and Jeff is supplying the in-the-field experience. That either maps best practice to theory or vice-versa, as the case may be.
There are two session slots:
Tuesday, Aug 28, 12:30 PM – 1:30 PM
Wednesday, Aug 29, 2:30 PM – 3:30 PM
And the session is called “INF-BCO1757 – VMware SRM – Where Theory Meets Practice”
…and no, despite my part-time occupation, I WILL NOT be presenting as Elvis. Remember, what happens in Vegas stays in Vegas…
Thank ya verrrry much….
July 24, 2012 2:33 PM
Posted by: MikeLaverick
This time around I decided to link my blog’s email to GMAIL, and ditch POP for good in preference for IMAP. With Outlook for Mac 2011 being so new, there was little info around even on Google’s site – and I was hampered by competing and contradictory TCP port numbers for IMAP and SMTP. However, I was able to use this post as a starting point:
One day I’m looking forward to no email except nice ones asking me how I am and whether I want to get together for email. I believe that day is called “retirement”.
Anyway, start by turning off POP support in the properties of your GMAIL account, and ensure you have IMAP enabled. I’m a new user of GMAIL, and I found that this was already the case in a new account.
The next stage is to add your account settings into Outlook on the Mac. You will find this under Tools and Accounts. The dialog box has a + icon that allows you to add additional email providers. The settings needed are pretty obvious – erm, once you know them. As ever with email clients, you’re a tick box away from getting a cryptic error.
Where I went wrong initially was enabling the option to “Always use secure password”. When I had that engaged I got an error saying that Outlook couldn’t authenticate. The other thing you will notice is that I have SMTP on 587, whereas macstories.net has it on 465. How did I get this number? Well, on my iPhone I remembered seeing the option to add a GMAIL account – and when I was having problems I set up GMAIL on the phone first (which worked first time, by the way!), so navigating around the settings I thought I would go for its port number. Sadly, there isn’t any IMAP info on the phone that I can see, which might have been helpful.
The next step was to hit the “More Options” button in Outlook to provide the authentication for SMTP. I’m still not sure if this is required or even done correctly – but heck, it seemed to work for me!
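For what it’s worth, the settings that finally worked for me can be sketched in a few lines of Python – a minimal sketch; the server hostnames (imap.gmail.com, smtp.gmail.com) and the IMAP port of 993 are my assumptions of the standard Gmail values, since the post above only pins down SMTP on 587:

```python
# Hedged sketch of the Gmail settings discussed above. The hostnames and
# the IMAP port (993, IMAP over SSL) are assumed standard Gmail values;
# the SMTP port matches the post: 587 with STARTTLS (macstories.net
# suggested 465, which would be implicit SSL via smtplib.SMTP_SSL).
import imaplib
import smtplib

GMAIL_IMAP = {"server": "imap.gmail.com", "port": 993}   # IMAP over SSL
GMAIL_SMTP = {"server": "smtp.gmail.com", "port": 587}   # SMTP + STARTTLS


def check_settings(imap_cfg=GMAIL_IMAP, smtp_cfg=GMAIL_SMTP):
    """Try connecting with the settings above (needs network access);
    returns a (imap_ok, smtp_ok) pair of booleans."""
    imap_ok = smtp_ok = False
    try:
        conn = imaplib.IMAP4_SSL(imap_cfg["server"], imap_cfg["port"])
        conn.logout()
        imap_ok = True
    except OSError:
        pass
    try:
        conn = smtplib.SMTP(smtp_cfg["server"], smtp_cfg["port"], timeout=10)
        conn.starttls()  # 587 expects STARTTLS; port 465 needs SMTP_SSL
        conn.quit()
        smtp_ok = True
    except OSError:
        pass
    return imap_ok, smtp_ok
```

A quick `check_settings()` from a terminal would have saved me some of the tick-box roulette in Outlook’s account dialog.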
You’re done! Once you hit the Send/Receive button or reload Outlook 2011, you will see the folders that make up your GMAIL account like so:
You’ll see that by default Outlook creates a brand-new “node” in the hierarchy. I actually quite like this separation, to be honest. But the macstories.net post does show how you can “collapse these into a single view”. For the moment I’m keeping them separate.
July 22, 2012 1:12 AM
Posted by: MikeLaverick
I’ve been busy the last couple of weeks producing content for VMware SMB. We’ve got a series of articles planned, as well as some podcasts too. The first of these was a Q&A session I did, which outlined how I got into IT and stuff like that.
Given my interest in all things available and recoverable, I also wrote about “Virtualized Availability” with a nod specifically to vDR and SRM.
Now, I know what you’re going to say. Availability & DR are two entirely different beasts. And I have fought a mainly losing battle to keep these two areas distinct from each other. Customers have always conflated one (availability) with the other (DR). Right now, I see clear blue water between the two challenges. But as time goes on I think we are going to increasingly talk less of DR, and more about “Site Availability” – especially as some of us get our hands on stretched technologies that allow VMware HA to work across two sites… For the moment I think we have a delicate line to tread: the line between talking about what’s possible NOW, and what will be probable in the FUTURE.
Anyway, stay tuned for a series of 20min podcasts with folks in the VMware SMB Community, with companion “thought” pieces around them. A kind of SMBwag if you like. ;-)
July 12, 2012 1:11 AM
Posted by: MikeLaverick
This week I got a very odd and cryptic message from a stranger. It took some research to piece together who this “person” was. He wanted to write a guest blog post on my site – and that’s something I usually welcome. But generally I want to know these folks better than I do this guy. However, in the interest of free speech and stimulating debate, I thought I would give him airtime… Anyway, this guy is called “UI_MAN“. From what I understand on twitter, he wanted to be GUI_MAN, but that ID was already taken. Stories about him abound – often contradictory – and urban myths aplenty. He comes across as quite bitter and twisted, and appears to have a long-standing grudge against PowerCLIman. He’s been rumored to say some very derogatory things about PowerCLIman, such as: “… you call yourself a superhero. But superheroes do not have West Country accents, and sound like someone from the Wurzels“. Some say he was the child of two factory workers in a PC manufacturing site – and that their DNA was corrupted by chemicals used to manipulate plastics. Others say he was involved in a hideous fire in the factory. He remains masked because bits of keyboard and mouse melted into his body, and became incorporated into his flesh. On his twitter page he states:
“I was born in plastic with two buttons on my forehead. My nemesis is PowerCLI man who believes the world is black and white, and should resemble DOS.”
All we do know is he’s called “Gooeyman”. I’ve asked him to keep it technical, and to try and put his long-standing grudge against PowerCLIman to one side. For the moment…
“Hello there, my fellow virtualizationist. I come today with an important message about the hidden evils of PowerCLI. For many years system admins have struggled with the graphical user interface of vCenter and the vSphere Client. So far little has been done to address the pernicious shortcomings of the GUI. My job as Gooeyman is to draw attention to these failings in the hope that VMware will see the error of her ways, and rectify them. However, at the moment my epic struggles have been hampered by the rise and rise of the forces of darkness and whiteness. By which I mean, of course, PowerCLI.
Throughout the millennium the forces of darkness & whiteness have dressed themselves up in the garb of heroic superheroes such as PowerCLIman. Often these forces have done so in order to dupe the foolish and gullible into thinking that the future resides in a monochrome world of black backgrounds and white text. For many, this reduction of the world of the system admin appeals to their binary natures. I’m here today to tell you that this need not be the case. You can join me in the struggle to bring back the forces of color and simplicity. It is my firm belief that the almost virus-like success of PowerCLI is caused in part by the lack of improvements in the graphical user interface that most admins use on a daily basis.
The limitations of the vSphere/vCenter environment are plain to see. Its major weakness seems to be its almost total lack of “bulk administration” features. In simple words: in most cases it is impossible to apply a setting at a datacenter or cluster level and have it apply to all ESX hosts or VMs. Of course, VMware’s response to this would be that “Distributed vSwitches” and “Host Profiles” address this need for automation and consistency. But as they do so they will neglect to say two things. Firstly, they have placed the sweeties in the jar on the highest shelf, so that most VMware Admins cannot reach them – to use DvSwitches and Host Profiles you need to be an Enterprise+ customer. Secondly, Host Profiles, whilst offering a rich set of configuration options, require the ESX host to be in maintenance mode. That makes Host Profiles fine for the configuration of a new server – but of little use for an existing server. It makes a simple task, such as adding or updating the NTP setting on a host, a convoluted and laborious one.
Step forward PowerCLI. In contrast, it is capable of such bulk activities, and it is free. Its popularity has been driven by freely available scripts from a variety of bloggers, such as VMware’s very own Alan Renouf. You might think this a great thing. But my fellow virtualizationist, it is not – sadly PowerCLI and its advocates have been duped and deceived by the likes of PowerCLIman. Now when any limitation is found in the gooey world, VMware points to PowerCLI as the solution, rather than improving the core graphical product. We are supposed to be entering a brave new world of the cloud, but still system admins are being forced to open a command-line window to get even the simplest of tasks completed – merely because of limitations in the vSphere Client/vCenter. Why isn’t there a reporting and diagnostic feature inside the core product? Why should the humble admin be forced to learn yet another tortuous scripting language to manage the systems they own? Why is it, in 2012, still acceptable to open a window on your desktop that resembles DOS 5.1 in 1992? The vast majority of VMware customers are Windows people. These people want to click with a mouse and be able to visualize their virtual world. These are the questions that sadly no-one but me – Gooeyman – is asking…
The other fiction propagated by PowerCLIman and his black & white ilk is that everything you can do in the vSphere Client you can do with PowerCLI. This is a lie that few in the PowerShell community will openly admit to. But if you bump into a PowerCLIman follower (I call them cmdlets, a bit like piglets), and ply them with drink, they will soon start telling you of various settings they cannot control. If you follow me on twitter, you’ll know I have issued a number of challenges to PowerCLIman. Interestingly, PowerCLIman has been eerily silent on these. Why? Because he knows that Gooeyman is stronger than him, and knows his weaknesses. Gooeyman knows the limitations of PowerCLI and will endeavor to draw people’s attention to them.
PowerCLI followers will also tell you of their sojourns into the deep, dark, unfathomable recesses of the “SDK”. This unearthly inferno is clearly inspired by Dante’s “Divine Comedy” – consisting of many circles of undocumented settings. Some PowerCLI fanatics are able to navigate their way around its caverns, but the vast majority baulk at its gates. That doesn’t stop them saying “Well, yes, there isn’t a cmdlet for that – but you could do it by accessing the SDK directly”, knowing that they themselves fear its entry…
Today I ask you – my fellow virtualizationists – to join me in the struggle against the darkness and whiteness of PowerCLI. To take up arms in the eternal battle against wonks and geeks who bore you silly with their cryptic code and slippery syntax. The very same individuals who write reams and reams of indecipherable and undocumented code – only to leave the businesses they work for, leaving some poor replacement admin at a loss to understand their rhymes. Join me today in this struggle against the script monkeys – together we can work towards a world of color where all you have to do is click “Next”.”
July 11, 2012 8:12 AM
Posted by: MikeLaverick
In part one of the series I talked about what it’s like to be an author, and in part two I talked about what it’s like to work with a publisher – in this final part I want to talk about what I think the future is for publishing generally.
The Printed Book is Technology
With the rise and rise of digital ebooks there have been some hasty attempts to suggest that the era of the physical paper book is over. I’m somewhat skeptical about this assertion, in the same way that I think it’s hasty to announce the Post-PC era. It’s funny how we don’t see printed books as “technology” because they aren’t bright and shiny with lots of buttons and an on/off switch.
But if you think about it, the printed book is a technology that has been around for hundreds of years – ever since Gutenberg made mass production of books possible. Gutenberg’s invention (movable type) meant books were no longer written by clerical scribes for those with the money (generally the aristocracy) – it made the printed word accessible even to people on modest incomes. Gutenberg’s invention drove mass literacy, and you could say we wouldn’t live in our democratic societies without his work. If you think about the technology my generation grew up with, most of it now resides behind glass cabinets in science museums. For me the printed book is up there with the wheel and the combustion engine. Pronounce its demise with extreme caution. This year, whilst on holiday in the Canary Islands, I read two paperbacks – Chuck Palahniuk’s “Fight Club” and Maya Angelou’s “I Know Why the Caged Bird Sings“. In the case of the Maya Angelou book, I picked it up for £1 in Oxfam, and it had been printed in 1979. It’s hard to think of any technology created in 2012 that will still be usable in 2043, when I will be 73 years old.
My point here is that the printed book has many virtues. It hasn’t changed substantively in decades and has an extremely long shelf life. There will never be a Betamax book which is rendered useless because no-one supports the format – or doesn’t understand how to turn the page. Printed books are relatively inexpensive items, such that I considered leaving my books with the hotel’s library (my girlfriend left her books with them…), but I would not dream of leaving my iPad or my girlfriend’s Kindle behind in our hotel room when we left. Plus, if someone comes round my house and wants to borrow a copy of “Fight Club”, I can hand it over to them. Whether I will ever see it again is another matter – in fact my brother lent me the copy some 6 years ago, and I’ve only gotten round to reading it this year! My point is that the cost of the printed book (especially 2nd hand) has plummeted to the point that they’ve become almost disposable (unless they are precious 1st editions of Austen’s “Pride and Prejudice”). It’s also perhaps worth mentioning that some schools and colleges bulk-buy copies of learned tomes – and lend them to students throughout the academic year. That can be an extremely efficient way of passing on information – the physical book might pass through the hands of many students before the thing is so battered and degraded that it has to be replaced. It’s hard to see that happening with digital media – where every copy of the ebook is licensed to the user with a device, and is not transferable to another. In short, it’s hard to see the concept of a lending library working under the current T&Cs imposed by the vendors… As you can tell, with a BA and MA in English and American literature, I love my books.
But – and this is a big but – I think printed books for technical and textbook purposes are dead in the water. Here’s why. I think the way people read technical books and textbooks is totally different from the way I read “Fight Club”. I read “Fight Club” cover-to-cover. I’d be very surprised if anyone read any of my books from page one to the end [if you have, then I congratulate you – but I think you are in the minority]. The reality is people dip into areas where they think they need to learn, and skip parts with which they are familiar. They may come back to a topic 6 months later – because their project is ready to adopt a certain technology. They may have learned everything there is to know about DRS, but that was 12 months ago and they need a refresh. Technical readers looking for the answer to a specific problem will not use a “Table of Contents” or an “Index” – they are more likely to adopt the search features of their chosen ebook reader instead. Now, before I move on, I would add a caveat. There are some books that continue to work in print media and have had a very long shelf life – such as Bjarne Stroustrup’s “The C++ Programming Language” and Knuth’s “The Art of Computer Programming”. But I think it would be fair to say that these are the exception, rather than the rule. There’s no reason why digital publishing should make print-ready publishing DOA next week, but I personally think there will be a steady decline over a number of years – as we have seen with the sales of tapes, records and CDs.
How digital publishing will change the industry:
There will be a lot of debate about this move – as folks like Amazon and Apple fall over themselves to sign up big-name authors, bypassing the publishing houses altogether. It will be the end of an era. For the moment Amazon is choosing to partner with the publishers – but who knows how long that unsteady and somewhat prickly relationship might survive. Apple’s stance is to partner on some things when it suits them, and not when it doesn’t. Their iBooks Author program has a long way to go – it’s very proprietary, with no author control over distribution or pricing.
We might be tempted to feel sorry for the publishers. But personally I don’t have a huge amount of sympathy for them. For decades they allowed the likes of Amazon (the retailer) to dominate the marketplace, such that it can command gouging discounts from the publishers (the manufacturer). As an author and part-time entrepreneur, I always felt the publishers didn’t do enough to protect their position, or react quickly enough to the rise of online book sales – and again to the rise of ebook readers. That allowed some – Amazon in particular – to create a dysfunctional and dominant position in the industry. At the opposite end of the scale, we have seen in the last decade the rise of self-publishing (to the degree that many conventional publishers have plans to create their own self-publishing portals). So I see a very divided world developing. The top authors get direct deals with the likes of Amazon, and the rest will rely on self-publishing and self-promotion – in the hope they will be picked up by an Amazon. As you can see, the publishers are being squeezed in both directions – by the power of Amazon and by the adoption of self-publishing by authors who see it as a way of circumventing the rules of the past. At the moment the big publishing houses like Pearson and McGraw-Hill are competing, and are not feeling too squeezed by the new paradigm. They are innovating and reacting to the new atmosphere of competition, and it’s by no means a foregone conclusion that Amazon and Apple will have their cake and eat it.
For me, the ability to write in a native ebook format offers a chance not to have to compromise on the look and feel of the final product. Despite the efforts of the publishers, there’s still a palpable gap between what the author intended and the final printed page. That’s because the printing process necessitates certain compromises that aren’t present in ebooks. No doubt there will be different compromises with ebooks, but they might be more palatable than the ones currently on offer.
Nothing new under the sun – The Music Recording Industry:
In many respects the changes that are about to happen mirror exactly the changes that took place in the music recording industry more than 15 years ago. The rise and rise of the MP3 and the MP3 player has led to a collapse in conventional CD sales. That hasn’t meant the end of music. In the same way, the decline of conventional print media doesn’t mean the concept of the “book” or “reading” or “writing” is in decline. When you read a book on an ebook reader, you’re still reading a book after all. And I’m personally convinced that the decline of the printed book need not necessitate a decline in overall literacy. What it does mean is that now, more than ever, an author must work to “build their brand”, and have a ready audience that recognises that brand as symbolizing quality.
In the music industry there’s been so much rapid change that most recording artists now make more money touring than selling their recorded work. I believe the same revolution is coming to areas of publishing, such that we may well see the end of some long-established publishing houses – in the same way we have seen the decline of many famous recording companies in recent years. I can see how the process of writing a book will become a lot like making music is now. Authors will be like artists – who produce, self-publish and self-promote their work. Think of a “MySpace”, but for budding authors. The hope will be that once they have a significant following or “go viral”, they will be picked up by a major distributor. That author will have complete copyright control, and will not be selling their rights out to a record company. You see, the music recording industry has an unenviable reputation for being the biggest bunch of cut-throat merchants when it comes to recording contracts with artists. That’s why so many modern artists set up their own publishing and recording companies as soon as they have the finances to do so. Similarly, the likes of Amazon should expect authors not to roll over to have their tummies tickled. I’ve leveled criticism at the distributors (Amazon) and the manufacturers (the publishers), but it’s also up to us as authors to strike out and show that, as the “creator-owners” of our own works, the retailers and publishers would have nothing to sell if we chose not to deal with them.
Books designed for ebook readers from day one:
In my vision I would like to see a digital publishing model that supports both online and offline use. Such a situation already exists to supply content to institutes of Higher Education, and I don’t see why this model couldn’t be applied to the world of commercial textbook publishing. You shouldn’t need to be connected to the web to read the content, but it would be constantly updated and revised for as long as you subscribe – or you would be allowed to buy the ebook for a one-off fee without updates. A bit like the SnS model that has taken off in recent years for software sales. I’d like to see ebooks really take advantage of multimedia capabilities such as audio, video and animation. Right now, what happens is a book is designed for paper first, and then ported to digital formats afterwards. What I would like to see is a move to authoring directly in a digital format from the get-go. For me it’s like the difference between doing a P2V of an existing server, or starting with a brand new clean virtual machine. The trouble at the moment is twofold. Firstly, ebook versions of textbooks have yet to reach the tipping point of being the de facto format that folks choose. Secondly, the current generation of ebook readers simply don’t have the capacity to take the volume of data created in the ebook format I’m imagining. The only viable way to store them currently is an online-only model. That might not be convenient for people on a long-haul flight, or in a datacenter that prohibits the use of devices connected directly to the Internet. Finally, of course, there is a proliferation of different formats and readers – which creates an additional layer of expenses and costs to be absorbed into the purchase price, whilst still making a profit.
I don’t see ebooks as the end of books. It’s just taking what used to be printed on paper and placing it on-screen, after all. When you read an ebook, it’s not as if you have become an illiterate slob, as most of the arty-farty literati would have you believe. For me, I see it as an extension of something I’ve done for a while – being the self-author, self-publisher, self-promoter and self-distributor of my content. It also offers some tantalizing improvements on the printed page.
For example, people always complain about typos or technical errors – the ability to make those corrections immediately, and have them updated on every subscribing device, is very attractive to someone like me who strives to be as technically accurate as possible (but also struggles to create typo-free content, as this blogpost undoubtedly testifies to!). It also offers the opportunity for what I call “just in time publishing”, where content can be delivered on much narrower time constraints. So I could have a beta copy of the book out on the day of the GA, with a second update once I’ve tested my content against the GA version of the product – and I’d be making ongoing edits and updates during the lifetime of the technology.
The rise of the iPad and the Kindle Fire means we can now offer full-colour screen grabs and diagrams. At the moment colour printing is normally cost-prohibitive in print media. You might be surprised to know by how much – I looked into it once. A colour version of my SRM 1.0 book would have jumped in production costs from $14 to $114. Admittedly, this price jump reflects the costs preloaded into self-publishing websites; conventional publishers have economies of scale that make colour printing much more cost-effective.
So for me we have yet to see a real ebook as it should be – with embedded author narration (a feature you would normally have to pay separately for) and on-screen demos by the author. In short, I think the time has come to adopt an “ebook first” policy, much in the same way as we adopted a “virtualization first” policy for the creation of new servers.
In part one I wrote about how significant writing during the beta program has become. I think there’s an important outcome that comes from writing during a beta program, and it’s a big one. Because of print-media publishing deadlines, the 1st edition of any technical book is likely NOT to be tested or verified against the GA release. For this reason readers should expect that not all the screen grabs, or even the step-by-step instructions covered, will “map” directly to the product version they are using. This is not unusual. It’s common in technical literature for a book to be written against the 5.0 version of a product only for a 5.1 version to be released within 12 months. And as we all know, even though a 5.1 version of a product is meant to be a “maintenance release”, differences can and do creep in.
For this reason I think it’s inevitable that technical books will have to be continually updated within the lifetime of the product release. This represents a major change to the way authors currently write and work on books. At the moment most authors take somewhere between 6-12 months to write a book. Once they submit their first draft to the publisher, the “window of opportunity” to make changes, improvements and enhancements gets narrower and narrower, until the point is reached where the author is only allowed to make typographical corrections. At that point the author puts down his pen (to use an analog metaphor) until such time as the ISV releases a major update…
I think this model is way past due (to use a library reference). Instead, I think we will have to move to a model where writing is a continuous process: the author is continually making improvements and changes within the lifetime of the book. Increasingly, readers will expect to be notified of these changes, and to receive automatic updates to their ebook devices. In some respects the “book” comes to mirror the lifecycle of software. When customers buy a software product, they expect bug fixes and patches – and in some cases changes in product functionality – in order that their purchase remains fit-for-purpose. Customers of technical books will soon (if they don’t already) have the same expectation. As for me, I would welcome such a change. It would mean that if I spotted an error, bug, typo or area for improvement, I could load up the ebook authoring software and make the correction. This would be track-changed, and then approved by my proof-reader. But this rosy view of the future is not without challenges. There have been attempts in the past at “Evergreen” publishing, where authors have promised to maintain and update content – sadly those authors rarely stay as committed as they should. So for this model to work, authors such as myself would need “incentivizing” in a manner that is very different from the way it is done now.
Of course this change is not without downsides. Often the end of one book project means the author is free to start another. I find it funny to talk of continuous writing – because I feel like I haven’t stopped writing since 2003! There’s generally a six-week period between me ending one book and starting another. Usually the ending process has me saying “that’s it – that’s the last book I’m writing”… Six weeks later my girlfriend spots me opening a file called “Chapter 2:”, and I have to make some sheepish back-pedalling excuse that I’m writing another book.
Bumps in the Road:
With that said, I do foresee potential bumps in the road. As ever there will be a battle for dominant ebook formats, and the owners of the distribution channels (Amazon and iTunes) are likely to want to lock content producers into writing content in a format that can only be distributed through them, on their devices (Kindle and iPad). So whilst I think it’s time for authors like myself to start creating ebooks natively in ebook authoring software – and to abandon Word and conversion tools – we need authoring tools that will allow us to freely export content in formats that are supported across many device types. That’s why I’m attracted to Project Gutenberg, which aims to promote ebooks free of royalties and free of vendor lock-in to proprietary formats.
It will also make life difficult for schools and colleges. Where once they could buy 30 copies of a printed book and use it year in and year out, the ebook licensing model assumes the book is bought by every student who owns their own ebook reader. A situation which clearly doesn’t exist (because of the cost of ebook readers) and would need fine-tuning to make the costs reasonable. It seems obvious to me that we shouldn’t limit ebook content to the ebook devices themselves – folks should be able to read their ebooks on any device they like, and that includes devices such as PCs, laptops and notebooks. Only by recognising all the device options available can “The Book” continue to be consumed by as many people as possible. It’s part of the BYOD ethos, I guess…
At the moment profit margins on technical books are low – and their shelf-life is short. This makes them a relatively poor investment for publishers when compared to other content they could choose to publish. I don’t expect this to change overnight, but ideally, if the cost of production can be lowered significantly, this might result in better margins for all concerned.
July 10, 2012 7:51 AM
Posted by: MikeLaverick
In my previous part in this series I talked about what it’s like to be an author – in this part I want to focus more on the process of writing for a publisher. So, if you have decided you’re going to write and you would like to go down the conventional publishing route, here are some tips and tricks for any aspiring writer taking that particular path.
There are a number of things you can do before you approach a publisher that can make you look more professional and credible. The more professional and credible you are, the more likely the publisher is to see you as a viable author. It also helps improve your status with them when it comes to negotiating a deal. So have a good idea on a topic or technology that hasn’t been previously published. There are loads of technologies from VMware that no-one has staked out as their territory. These represent virgin opportunities. Pick a technology that fills your belly with fire, and try to make yourself the leading independent go-to guy for it. (Do you spot a pattern there?)
Always write to the newest software release – and as I stated earlier, write under the beta programme, assuming you can get on it of course.
Write a “Table of Contents” together with an estimate of the total page length (including graphics). The table of contents should be relatively easy to build out – and the publisher will expect 2-3 levels to it, to show you know what you’re doing. See the publisher as a big Hollywood production house, and yourself as a director with your screenplay and storyboard. Just like the guys in Hollywood, they need to see that you have a clear idea of where you are going… If you’re not sure how the ToC should look and have no idea of word length, pop down to your local bookstore (sic) and check out their computer section (sic), and use an existing book of the size, form-factor and length that you think your topic merits. A big, thick, weighty tome like my vSphere4 book or the PowerCLI reference book runs to about 650-750 pages (of text & images), whereas my first SRM book (back when I had no storage vendors covered, and it was a 1.0 book) came in at about 300 pages.
If your book is going to have LOTS of images and screen grabs, warn your publisher upfront. My work tends to have lots of screen grabs; it’s the way I work. Many publishers assume technical books will have just 10-20 screen grabs. They might be in for a shock when your book has 50 screen grabs per chapter. Bear in mind all these images need managing, manipulating, captioning and processing – so it could have a massive impact on the publishing schedule.
Get the publisher’s Author’s Kit:
Once you have agreed terms, get hold of their standard author’s template as soon as you can – most publishers provide an “author’s kit” which includes a Word template. If you intend to start writing before you have signed a contract, just use a bog-standard template, and use it consistently. When you come to take your template and match it to the publisher’s standard, it will be a much easier process to get your content to match their style. With that said, you will be surprised at how ignorant publishers can be about the basic features of a word processor. I once had to explain what a “style” was in Word – something I thought I’d stopped doing back in 1994.
Images can be a PITA:
Whilst most publishers don’t mind you embedding screen grabs and diagrams in the Word document, these will eventually be removed from the manuscript as it’s converted from Word to some sort of DTP system such as Quark. In the extreme case they literally copy your beautifully formatted document into Notepad (thus removing all the formatting) and then paste it into Quark, putting the formatting back. Incidentally, this formatting will be to their standards, not yours. So I would keep your bold & italics to a minimum, because they might not survive the final imprint anyway. Tough!
So make sure you keep every image as a separate file (ideally .TIFF, without lossy compression) with a numbering system like “Figure 1.1”. When you write your book, the publisher will expect captions of about three lines max for each and every image. They will also expect you to add leading text such as “…as can be seen in Figure 1.1”. This is because where your image sits in the manuscript might be on the opposite page in the final print, or even over the page if the text falls on page 341 and the image lands on page 342. In my experience images are the biggest PITA. That’s because no publisher seems to have devised a method to allow you to easily add new images and reset all the numbering in the text and all the filenames of the .TIFF files. It can get so bad that you start to avoid adding new images after the first draft because of the volume of work it entails. In fact, in some cases I’ve hired my step-daughter or step-son (who were students at the time, and therefore easy to exploit) to do the hard graft for me!
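Since no publisher seems to supply tooling for this, here’s a minimal sketch of how the renumbering chore could be automated. The function name and the “inserted after figure N” convention are my own assumptions, not any publisher’s workflow:

```python
import re

def bump_figure_refs(text, chapter, inserted_after):
    """Shift every 'Figure C.N' reference with N greater than
    inserted_after up by one, making room for a figure newly
    inserted after that position in chapter C.
    (Renaming the .TIFF files themselves would follow the same
    logic, done in descending order to avoid name collisions.)"""
    pattern = re.compile(r"Figure %d\.(\d+)" % chapter)

    def repl(match):
        n = int(match.group(1))
        if n > inserted_after:
            n += 1
        return "Figure %d.%d" % (chapter, n)

    return pattern.sub(repl, text)
```

So inserting a new figure after Figure 1.2 would turn an existing “…as can be seen in Figure 1.3” into “Figure 1.4”, while leaving Figures 1.1 and 1.2 (and every other chapter) untouched.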
How many co-authors should you have?
There is a quick answer to this. Just you! Seriously, writing on your own gives you the ultimate freedom to express your content as you see fit – including structure and order. It also means continuity of style from one chapter to another. It means you can work towards a book that makes sense when read end-to-end (although it’s unlikely that anyone reads a technical book end-to-end). The downside of working on your own is that all the weight and pressure is on your shoulders, and your shoulders alone, with no-one else to share the burden.
I’ve written twice with co-authors – first on the Vi3 book with Ron Oglesby and Scott Herold, and then more recently on the VMware View 5.1 book with Barry Coombs. I must admit I enjoyed the experience both times. Even though in both cases we merely cut the book in half and wrote independently, it did feel better to have another person on the project. Writing on your own can be a lonely process. Psychologically, working with a co-author makes you feel as if the workload has been halved – even though you still do 100% of your own writing. I know for sure I wouldn’t have released the VMware View 5.1 book without Barry’s assistance. I was just so busy with the SRM book last year, the idea of starting yet another book in the same year on my own was more than I could bear. With the re-write of the VMware View 5.1 book I opted for a change from the first-person singular (I and my) to the plural (we and our). That did mean looking again at places where I had expressed very strong personal opinions, and either removing them or couching them in less strident English! When writing with a co-author I feel it’s important you take a “collective responsibility” for the content – so it’s important to discuss areas of contention (normally what’s considered a best practice) and iron these out early.
Where I have seen it go wrong is where there have been too many authors. It’s the old case of “too many cooks spoil the broth”. Where it really shows itself is where you have two writers with massively different styles. So you go from “Well, folks, let’s see what happens when you click the bad ole OK button” to what sounds like a University Professor talking about an obscure area of quantum physics. It’s a bit of a jolt to the reader. It’s worth bearing in mind that the royalty on any books sold would normally be split between the co-authors. More authors doesn’t mean more money – it means the money in the pot being increasingly split. But of course, you’re not in it for the money, are you???
In the main I would say the “Rule of 1, 2, 3” applies here. Four authors is too many, and it complicates matters when dealing with the proof-reader and reviewers.
Finally, if you are co-authoring, pick your buddy carefully. Many a book has faltered with three authors, where two make the deadlines and the third bails halfway through, leaving the book heavily delayed and the project looking doubtful.
How Advances and Royalties work…
As far as I can tell, few new authors understand how the business of print-media publishing works – and even fewer readers do. I’ve lost track of the number of folks who have asked me if I’ve bought a new Porsche with my royalties. If only they knew the realities.
Firstly, very few technical writers receive an advance. I have twice, but as far as I know it’s actually quite rare. Let’s be 100% clear on what an advance is and how it works. It’s essentially an upfront payment of the royalties. This means whatever the cheque states, you will have to “earn” it back in book sales. So if the advance is $10,000, and your royalty per book is $10, then you would have to sell 1,000 copies of your book BEFORE you earn any more royalties. Advances are NOT free money that publishers pay to reserve your services or because they like you.
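The earn-out arithmetic is simple enough to sketch in a few lines of Python (the figures are the hypothetical ones from the paragraph above, not real contract numbers, and the sketch assumes an unearned advance isn’t clawed back, which is the usual arrangement):

```python
import math

def copies_to_earn_out(advance, royalty_per_copy):
    """Number of copies that must sell before the advance is
    'earned out' and further royalty payments start to flow."""
    return math.ceil(advance / royalty_per_copy)

def royalties_due(copies_sold, advance, royalty_per_copy):
    """Royalties payable on top of the advance (never negative,
    since the unearned portion of an advance isn't repaid)."""
    return max(0, copies_sold * royalty_per_copy - advance)
```

So with a $10,000 advance and $10 per copy, `copies_to_earn_out(10000, 10)` gives the 1,000 copies mentioned above, and selling 1,500 copies would yield $5,000 of further royalties.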
Why are advances so important to authors? Well, the normal process for payment of royalties is somewhat convoluted, and as such it can be quite some time until you as the author see any wonga. Here’s why: the publisher (the manufacturer) has a weird relationship with the booksellers (online retailers like Amazon, or physical stores like Barnes & Noble). In this relationship the publisher provides N copies of your book (usually at some massive discount to Amazon). The retailer has the right to return any copies they couldn’t sell, or that were returned by the customer. Personally, I think this is a very odd setup. For example, if I ran a store full of sweeties and newspapers which I later found I couldn’t sell, I would see that as my problem. I wouldn’t go back to the wholesaler or cash-and-carry and say, I’m sorry I couldn’t sell these boxes of Snickers – can I have my money back?
This is the reason your royalties don’t come through very quickly. The publisher has to subtract the unsold and returned copies from the amount supplied to work out the actual number sold. In this relationship the publisher subsidises and reduces the risk to the retailer. This is probably neither fair nor equitable, but that is the nature of the business. But hey, you’re an author – remember, you’re plankton, and Amazon is the killer whale…
For this reason advances are excellent for the author, because they’re a way of bypassing this dysfunctional relationship between publisher and retailer. It’s so dysfunctional you’d think the publishers would have thought of a way of becoming their own retailers. They didn’t. The retailers became publishers instead, using the ebook as the cost-effective way of cutting the publisher out of the loop. Now, in fairness, I am bashing the publishers a bit too much here. Many publishers do have their own online retail outlets, such as Pearson and the VMware Press. BUT… when compared to the market domination of Amazon, their attempts do seem somewhat small-scale. Sadly, Amazon has captured the mindset of the customers – need book fast = Amazon.
In my experience the BEST deals don’t come from publishers. They come from the software or hardware vendor whose technology you’re writing about. I’m thinking of the likes of VMware Press, Cisco Press and Microsoft Press. Often these vendor-backed “presses” are merely relationships with mainstream publishers like Pearson or McGraw-Hill. But your project will be backed by a vendor who has deep pockets, and who will often give you much better technical support than a publisher on their own could offer. It also changes the relationship, in that the vendor is a customer of the publisher. If you have a very good relationship with the vendor, that can spill over into the way the publisher handles you. They are much nicer to you. I think publishers are sometimes guilty of seeing authors as hired hacks – you’re just another author to them. However, if their client (the vendor) indicates a strong preference for you to write a book, this can do much to enhance your status in the publisher’s eyes. You are now pivotal to the project.
July 9, 2012 10:25 AM
Posted by: MikeLaverick
I’m often asked by people about what it’s like to be an author of books. Whilst I was on holiday recently I started to think about my experiences, and the way technology is changing
and how that will influence the way I might write in the future. I became an author by accident really – from writing free guides around ESX2/vCenter1 that evolved into writing a book with Ron Oglesby and Scott Herold about Vi3. It also happened because I had the time to do it, either as a freelance instructor or later as a full-time writer for TechTarget and on my RTFM blog. In later years, as I started the 2nd, 3rd and 4th editions of the work, I increasingly opted to donate my royalties to charity. That’s because I’ve never seen my books as a way to make a living, and the money raised could do more for a good cause than it could sitting in my bank account.
It’s all about time…
So, in my experience a lot of people overly focus on the technical side of writing when they think about writing a book. There’s a lot of anxiety about whether the author has the technical abilities to write. In my experience this isn’t normally the challenge. People generally underestimate their technical abilities – especially as we are endlessly surrounded by reminders of how little we know, given the vast vistas of the IT landscape. The biggest limit is TIME. You need time to work on a book, and it’s difficult to balance the demands of a regular daytime job with a commitment to write a book, especially if you have publishing deadlines. Think about it this way: when you’re pursuing an IT certification, how do you balance the time between work, family and study?
One piece of advice about this: start writing a book about a technology that is still in beta. In fact, getting on a beta program should be your first step. Write your book around the beta and look to release it very shortly after the GA if possible. This will extend the shelf-life of your book to the maximum. In the case of VMware, their release strategy is now based on a minor 5.1 release in one year, followed by a full 6.0 the next. That means, from the GA of a 5.0 product, your book will have a shelf-life of just two years before it becomes “superseded” by a newer release. This makes getting on the beta program when it opens, and writing content during the beta program, an absolute must. As we will see in later parts, I personally believe these increasingly narrow time frames from one release to another signal the inevitable decline of the paper-based, physical book towards an era of digital-only delivery and subscription-based models. As a former student of American and English Literature, I don’t think that signals the immediate death of the paper book. I think the format will stay around for many years, and it will certainly be there when I shuffle off this mortal coil. After all, the “book” hasn’t been a single entity since it was created. Early books were actually manuscripts written by scribes, usually funded by the church or the King or Queen. They were predominantly not even in the language of the people, but in the language of the church (Latin). Since Gutenberg we have seen the democratization of the written word – and I see these ebook readers as simply another form factor in which people can consume the written word. Far from being the “death” of the book, we could see a revitalization of literacy and reading, which have taken a hit in recent years…
Is it worth it???
Just like in the world of fiction or literature, the desire to write comes from within. That’s especially important in the world of technical authoring because, to be brutally honest, technical authors aren’t in it for the money – and if they are, they are sadly deluded. As you might gather, the shelf-life of an IT book is particularly short and the royalties per book aren’t overwhelming. In my mind the only people who make real money out of writing and publishing books are the Dan Browns and J.K. Rowlings of the world, who have their works translated into every language and made into movies. Now, with that said, when the royalty cheques do land on the welcome mat, they are just as welcome – money is money after all. But I believe the real motivation should be that you have an uncontainable urge to pass on your pearls of wisdom to those willing to read. You should actually find the process of writing enjoyable – I do. If I didn’t, I wouldn’t be writing this blogpost, would I? What I’m getting at here is that the main reward is the work itself – it is in the actual writing, and the sense of achievement once you have finished your work. One by-product of writing is that it does wonders for your technical knowledge as well. Something I saw when writing about ESX2/3/4 – my worry is that if I don’t write a book on ESX5 or ESX6, my techknowledgey won’t be as good as it once was.
I would also say that in our competitive world, it hasn’t harmed my career one little bit that I have six books under my belt – four of which have been self-published. Being an author has helped differentiate me from others. When I started blogging about VMware you could count on one hand the number of people doing the same. As the company’s star rose, so did the number of other folks doing exactly the same as myself. Writing books for the likes of McGraw-Hill, Pearson or the VMware Press separates you out from the rest of the crowd. It puts you in a unique group (I won’t say elite, because that would be egotistical!), and gives you, as an individual, a personal USP. Additionally, I would say writing a book is a bit like doing a PhD. The mere fact that you had the personal drive, commitment and sheer gumption is testament that you’re something beyond the norm (it means you’re abnormal). Add on top of that handling the distribution and promotion of your own work, and you’re marked out as a kind of authorial entrepreneur.
Should I go with a publisher or self-publish?
There are advantages and disadvantages to both. Let’s talk about the advantages of each approach first. The great thing about having a publisher is they get you distribution to many avenues (Amazon, Safari, Barnes & Noble et al). You also get the backing of a professional proof-reading and production process, which ensures that your content is as free of typos as is humanly possible. They also handle arranging a technical review of the book, and might even facilitate contacts with the vendor you’re writing about. Although, if you’re following my advice to the letter, your early access to the beta program probably means you have better hooks into the vendor or ISV than your publisher has. Getting vendor sponsorship and buy-in, as is the case with VMware Press, can make a significant difference to your remuneration package. I will say no more. If you’re working with a publisher who doesn’t have a relationship with the vendor, you should see royalties of $10-15 per book sold. As a European, I made sure that my royalties were the same regardless of region. I don’t see why I should make less on books sold in Europe than on books sold in the US.
The advantage of self-publishing is that you’re in 100% control of the look, feel and content. You also have 100% control over deadlines and so on. And you make 100% of the profit margin as well. It’s the author who decides when the content is ready. That also means you have to design your own cover, acquire an ISBN (without it, neither Amazon nor a public library will stock your book) and promote it to your peers.
To my mind there are merits in both approaches. I embarked on the self-publishing route mainly because I thought it would be fun and interesting to do. I was also being very practical. I wanted to write about VMware Site Recovery Manager back when the product was only an alpha release. I knew the customer base would be small when compared to, say, Vi3 or vSphere4, so I knew I would be up against it when it came to trying to convince a publisher to back the project. I must say I enjoyed the process greatly, and there was a great sense of achievement when I finally had the book in my hand. Here was something I’d created, where I’d been intimately involved in every aspect of the production.
Where I went wrong with self-publishing when I started was pricing. I was trying to see if I could “beat” the publishers at their own game, and make more money by self-publishing. For a moment I lost sight of my original point – don’t do it for the money. I think I overpriced the 1st Edition of my SRM 1.0 book at $49.99, and put off buyers. I also didn’t offer the book in PDF format at a reasonable price either. I moved far more copies of the SRM 4.0 book when the hard copy was only $29.99 and the digital version was $10.
In my next part in this series I will talk in more detail about what it’s like to work with a publisher, and try to offer budding technical authors some tips and tricks about the process.