Sunday, February 27, 2011

Learning from the Master - Steve Jobs

Steve Jobs, CEO of Apple.
He is also one of the greatest presenters in the world.






His most famous and most moving speech is the 2005 Stanford commencement address.
"Stay hungry, stay foolish."







The iPad launch event.
Rather than the specs of the new product,
what the audience really wants to know is what the iPad can do for them.
Steve presents from the audience's point of view.
His slides are known for large images with only a little explanatory text.







I once read a book about the secrets behind his presentations:
"大家來看賈伯斯, 向蘋果的表演大師學簡報" (the Traditional Chinese edition of "The Presentation Secrets of Steve Jobs")




http://www.books.com.tw/exep/prod/booksfile.php?item=0010460757


It points out several things that set Steve Jobs apart:
-  Before you start building slides, get your pen and paper ready.
   Think it through, sketch out a script, and decide how to deliver the key messages to your audience.
-  Use as little text as possible; visual images are the most powerful channel of communication.
-  Practice, practice, practice. Strive for perfection.
- ...

Presentation is an art, and the pursuit of perfection has no limit.
Studying the masters' work helps us reach the next level.


------------------------------------------


Steve Jobs, the CEO of Apple.
He is also one of the greatest presenters in the world.





His most famous and most impressive presentation is the Stanford commencement speech in 2005.
"Stay hungry, stay foolish"





The iPad announcement.
Rather than the specs of the new product,
people want to know how the iPad will affect their lives.
Steve presents from the users' perspective.
Most of his slides simply show one image,
with very little text.





I have read a book about the skills behind Steve Jobs' presentations:
"The Presentation Secrets of Steve Jobs: How to Be Insanely Great in Front of Any Audience"

http://www.amazon.com/Presentation-Secrets-Steve-Jobs-Insanely/dp/0071636080

Steve Jobs's presentations are incredible because he follows a few principles:
- Before opening the presentation software, prepare pen and paper first.
  Start by thinking, sketching, and scripting.
  Think about how to deliver the key messages to the audience.
- Use as little text as possible. Visuals are the most powerful way to convince people.
- Practice, practice, and practice. Strive for perfection.
- ...

Presentation is an art.
The road to excellence is endless.
Learning from the master helps us keep getting better.

Thursday, February 24, 2011

Key to Successful Automation III - Quality



The purpose of test automation is to ensure product quality.
But when the developers' code is verified by test programs, who verifies that the test programs themselves are free of bugs?
I doubt anyone dares to say, "Boss, I need another group of people to test the automated test code."

What happens when the test code is of poor quality? If the automated test results are unstable, with frequent false alarms or missed real problems, test engineers gradually lose confidence in the automation reports. They either spend a lot of time carefully checking every spot the report flags as "possibly" broken, or, as the release date draws closer day by day, they fall back on manual testing to ensure quality. The original goal of test automation, saving regression-testing cost, is not achieved at all; instead, a lot of precious time is wasted. It is a loss on both fronts.

OK, now we know how serious the consequences of doing it badly are.
So how do we improve the quality of test automation?

1. Treat test automation as a development project
Test automation should have its own goals, strategy, resources, schedule, and design, and be run as rigorously as any development project.
The test development schedule should be aligned with the product. As for goals, quality matters more than quantity: 20 stable test cases are far more useful than 200 unstable ones.
Since this is still software development, version control is absolutely required, and a bug tracking system is strongly recommended.
In addition, there must be a way to track progress. We hold a 15-minute daily sync-up meeting every morning to discuss what was finished yesterday, what will be done today, and what obstacles we have hit, and we use a WBS (work breakdown structure) to record how complete each task is.
2. Code inspection
Code inspection means a group of people reviewing the code together and pointing out errors or places that need improvement. Everyone has blind spots, so we use collective wisdom to raise the quality of the code.
Our actual practice is as follows: the scope of one inspection is about 1,000 lines of code, and the whole test team takes part (about four to six people; a larger team can be split into feature teams). A few days before the formal inspection meeting, the author of the code under inspection gives the team a brief walkthrough of its structure to speed up the review. Before the meeting, every member should have actually read all the code to be inspected and noted down the problems they found.
The inspection should focus on the code's structure, logic, and maintainability (copy-paste and hard-coded values should be fixed); places with insufficient comments can also be raised, but there is no need to dwell on small typos. One more thing is well worth checking: does the code have timing issues? Test code sometimes waits for the UI to switch or for the product to finish something by sleeping, but these times vary with the machine's specs, so waiting for a fixed period makes the automated results unstable, passing one run and failing the next. A better approach is to use events, or to check the product's logs, to decide whether something has completed.

The bar for the inspection can be set as: if this code were handed over to you to maintain today, would you find it acceptable?
In the inspection meeting there is a moderator and a recorder. The moderator has the participants raise the problems they observed, but also keeps the meeting flowing: each issue should be discussed for no more than 3 minutes, and detailed solutions are not worked out; once everyone agrees it really is a problem, move on to the next item, and anything that cannot be settled in 3 minutes can go to a separate meeting. The recorder writes down everything that needs to be discussed or fixed so that it can be tracked. At the end, the moderator has everyone vote: assign someone to follow up, or, if the quality is too poor, hold another inspection meeting.
The meeting should not run too long; after 2 hours the participants' attention starts to drift and efficiency drops.
Besides improving test code quality, code inspection brings other benefits. First, because people know their code will be reviewed, they pay more attention to details and comments during development; it keeps everyone alert. In addition, reading senior engineers' code helps newcomers grow and reveals which functions can be shared. If there are personnel changes, handover also becomes easier.
Code inspection takes some time, but I believe it is worth it.
3. Keep it simple
Keep the test code, the debugging, and the reading of test reports simple.
Simplicity saves time on routine work, lets us focus on more important things, and maximizes the benefit of test automation.
If the test code is simple, there are fewer problems, debugging is easier, and code inspection goes faster. Unit tests are a classic example. In general, single-threaded code is also easier to debug than multi-threaded code.
Whether debugging is simple can be judged from several angles: can the test framework be stepped through? Are the product's debug logs collected? Does the test code itself write complete debug information? Is the test environment recorded at the time of failure, for example a screenshot, or even, if there is enough disk space, a virtual machine snapshot taken when the automated test fails? The more information you collect, the easier debugging becomes.
Keeping test reports easy to read means that when the report comes out, you can quickly tell the current quality of the product. Automatically generating some analysis of the results speeds up both reading and debugging. For example, with 10 test machines, besides each machine's own report there should be one consolidated analysis report showing the state of all machines. Also, can newly developed test cases be separated from the currently stable ones? A colleague once made a suggestion I really like: the automation report can be split into "stable/developing", and newly finished test cases can first be marked as developing, so the report is not confusing to read.

Doing test automation should be like running a luxury brand, like PRADA.
Only boutique-quality test code can truly ensure the quality of the product.



-----------------------------------------------------




The objective of test automation is to ensure the quality of the product.
However, when production code is verified by test automation, who is responsible for the quality of the test code itself?
I bet nobody tells his/her boss, "Hey boss, we need more test engineers to test our testing code."

What is the impact of poor-quality test automation?
If the automation results are not stable, with frequent false positives or false negatives, test engineers lose confidence in them. They either spend lots of time double-checking issues flagged in the automation reports, or they give up on automation and test manually as the project schedule tightens.
In the end, the goal of test automation, saving regression testing effort, is not achieved, and precious resources are wasted. Bad automation is worse than no automation.

Well, now we understand how serious bad test automation is.
How can we improve it?

1. Treat Test Automation as a Project
For test automation, we should define the goal, the strategy, the schedule, the design, and the resource plan. It should be as formal as product development.
The automation schedule should always align with the product. When defining the goal, quality is more important than quantity; 20 stable cases are worth more than 200 unstable ones.
Test automation is code development, too. A source control system is absolutely essential, and a bug tracking system is strongly recommended.
Besides, there should be a way to track development progress. My team has a 15-minute daily sync-up meeting every morning, where we share what we accomplished yesterday, what we plan to do today, and what obstacles we have met. We also maintain a WBS (work breakdown structure) to trace all tasks.

2. Code Inspection
Formal code inspection involves multiple participants reviewing code together. It is intended to find and fix defects and to improve code quality. Everyone has blind spots, but they can be covered by others.
Our practice is that the scope is about 1000 lines of code per review, and the whole test team (4 to 6 people; a big team can be divided into feature teams) is involved. Before the inspection meeting, the author holds a short introductory meeting to give an overview of the code to be inspected, which helps reviewers read it more efficiently. Every participant must review the code before the inspection meeting and take notes of the defects he/she finds.
Inspection should focus on the structure, the logic, and the maintainability of the code (copy-paste and hard-coded values should be avoided); insufficient comments can be highlighted, too. Typos can be noted but do not need to be discussed in the meeting. One common error in test automation deserves special attention: timing issues. Sometimes, when a test program waits for the user interface to switch or for the product to respond, it calls a sleep function and waits for a fixed period. However, the switching or response time varies with the test machine's specs and environment, so waiting a fixed amount of time makes the automation unstable; sometimes it works and sometimes it doesn't. A better way is to rely on system events or on logs created by the product.
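As a minimal sketch of that idea (the log path and the expected message below are hypothetical placeholders, not taken from any real product), a polling wait with a timeout is already far more stable than one long fixed sleep:

import time

def wait_for_log_message(log_path, expected, timeout=60, interval=1):
    # Poll the product log until the expected message appears, or give up at the timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with open(log_path, "r") as log_file:
                if expected in log_file.read():
                    return True
        except IOError:
            pass  # the product may not have created the log file yet
        time.sleep(interval)  # short poll interval instead of one long fixed sleep
    return False

# Hypothetical usage: fail only if the event never shows up within the time budget.
if not wait_for_log_message("/var/log/product.log", "Export completed", timeout=120):
    raise AssertionError("Product did not report 'Export completed' within 120 seconds")

The same pattern applies to waiting on UI state or system events: check a condition in a loop with a deadline, rather than guessing one sleep value that fits every machine.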
The criterion for inspection should be: if this code were handed over to you, would you be comfortable maintaining it?
During the inspection meeting, the moderator leads the discussion and draws attention to each section of code. Inspectors contribute issues from their preparation notes. Issues are evaluated to determine whether they are real defects, and the recorder documents them. The discussion of each issue should take less than 3 minutes, and to save time, detailed solutions are not worked out during the meeting. At the end of the meeting, the moderator lets reviewers vote on whether the inspection can be closed and assigns members to follow up on issues; if the code quality is not acceptable, another inspection meeting is needed.
Code inspection not only improves code quality but also brings other benefits. First, people know their code will be reviewed by others, so they develop tests more carefully and write better comments. Second, reading code written by experienced engineers helps junior engineers grow and lets everyone know which functions can be reused. If there are personnel changes, handover also becomes easier.
It takes time to inspect automation code, but it is worth it.

3. Keep It Simple
Keep the test code, the debugging, and the test reports simple.
Keeping things simple frees precious time from routine tasks so people can focus on more important things, and it lets automation deliver its maximum value.
If the test code is simple enough, bugs are few, debugging is intuitive, and code inspection is efficient. Unit tests are a typical example of simple tests. In general, single-threaded code is much easier to debug than multi-threaded code.
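For instance, a unit-style case like the sketch below (the string-formatting helper is a made-up example, not from any real product) has no threads, no UI, and no environment dependencies, so it is trivial to debug and to inspect:

import unittest

def format_size(num_bytes):
    # Hypothetical helper under test: render a byte count as a human-readable string.
    if num_bytes < 1024:
        return "%d B" % num_bytes
    return "%.1f KB" % (num_bytes / 1024.0)

class FormatSizeTest(unittest.TestCase):
    def test_bytes(self):
        self.assertEqual(format_size(512), "512 B")

    def test_kilobytes(self):
        self.assertEqual(format_size(2048), "2.0 KB")

if __name__ == "__main__":
    unittest.main()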
Whether debugging is simple can be checked with a few quick questions. Does the test framework support stepping through the code? Does the test program collect enough product debug logs? Does the test program itself provide adequate logs? Are the environment and system information recorded when an error occurs, for example a screenshot at that moment, or even a snapshot of the virtual machine if the testing infrastructure supports it? Sufficient information makes debugging easier and improves test code quality.
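A rough sketch of recording that context on failure, assuming a take_screenshot helper supplied by your own framework (it is only a placeholder here and is left commented out):

import os
import platform
import time

def collect_failure_context(test_name, error, output_dir="failure_artifacts"):
    # Record environment information when a test fails, so the report is debuggable later.
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    report_path = os.path.join(output_dir, "%s_%d.txt" % (test_name, int(time.time())))
    with open(report_path, "w") as report:
        report.write("Test    : %s\n" % test_name)
        report.write("Error   : %s\n" % error)
        report.write("Time    : %s\n" % time.ctime())
        report.write("OS      : %s %s\n" % (platform.system(), platform.release()))
        report.write("Machine : %s\n" % platform.node())
        report.write("Python  : %s\n" % platform.python_version())
    # take_screenshot(output_dir)  # placeholder: use whatever your framework offers, if anything
    return report_path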
Keeping test reports simple to read means that when the reports are sent out, it should be easy to understand the quality of the product. Auto-generated analysis of the test results helps us check them more systematically and efficiently. For instance, if 10 test machines execute the automation and each generates its own report, you do not want to check the reports one by one; a consolidated analysis of all 10 reports saves time. Another example: some test cases are newly created and still unstable; can we distinguish them from the stable ones? My colleague suggested adding "stable/developing" labels to the test reports, so new cases can be marked as "developing" and the results are not confusing.
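A small sketch of that labeling idea (the case names and results below are invented for illustration): tag every case with its maturity and summarize each group separately, so a failure in a "developing" case is not mistaken for a product regression.

# Hypothetical per-case results collected from all test machines.
results = [
    {"case": "test_login",         "label": "stable",     "passed": True},
    {"case": "test_export_pdf",    "label": "stable",     "passed": False},
    {"case": "test_new_dashboard", "label": "developing", "passed": False},
]

def summarize(results):
    # Build a pass/fail count per label ("stable" vs "developing").
    summary = {}
    for record in results:
        bucket = summary.setdefault(record["label"], {"passed": 0, "failed": 0})
        bucket["passed" if record["passed"] else "failed"] += 1
    return summary

for label, counts in sorted(summarize(results).items()):
    print "%-10s passed=%d failed=%d" % (label, counts["passed"], counts["failed"])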

Test automation should be run like a luxury brand, such as PRADA.
Only high-quality test automation can truly ensure the quality of the product.


Wednesday, February 16, 2011

Ten Simple Rules for Good Presentations

Original article: http://www.scivee.tv/node/2903

Rule 1: Talk to the Audience
This means more than just facing your audience: make as much eye contact as possible, which adds a certain warmth to the presentation. Also, design the presentation for your audience. Know their background and level of knowledge, and what they actually came to this talk hoping to hear. An off-topic presentation only puts people to sleep.
Staying on topic is the foundation of a good presentation.

Rule 2: Less Is More
A common mistake beginners make is saying too much. They may be eager to prove to the audience how much they know, but saying too much blurs the focus and wastes precious Q&A time.
A good presentation is clear and concise and leads the audience to think and to ask questions. If no questions are asked at all, the audience most likely did not understand, or the content was too bland. Talking too much also tends to mean talking too fast, so the audience cannot absorb the key points.

Rule 3: Only Talk When You Have Something to Say
Sometimes a speaker is so enthusiastic that he wants to share everything he knows.
But remember that your audience's time is as valuable as yours. Do not waste it on incomplete material.

Rule 4: Make the Key Message Stick
A useful test: a week after the presentation, ask an audience member whether he or she remembers its main points.
In my experience, a typical person can remember about three points.
If the audience can name the three points you intended, congratulations! Your presentation was a great success.
If the points they recall are not the ones you intended, your emphasis was probably off.
If they cannot recall anything at all, well... there is still plenty of work to do.

Rule 5: Tell a Story
A presentation is like a story; it should have a structure.
Draw everyone into the topic (the beginning), state the central idea (the middle), and arrange a strong finish (the end).
This also makes the key message easier to understand.

Rule 6: The Podium Is a Stage
A presentation should entertain the audience, but know your limits. If you are not naturally humorous, do not force yourself to play the clown on stage. If you are not good at telling anecdotes, do not recite them like a memorized script.
A good performer deepens the audience's impression and makes the key message easier to remember.

Rule 7: Practice and Time Your Presentation
This is especially important for beginners. Even more important, stick to what you practiced when you present. If you are not familiar enough with your topic, how can you convince the audience? The archer who splits a willow leaf at a hundred paces and the old oil seller who never spills a drop both owe their skill to constant practice.
The more presentation experience you accumulate, the more opportunities to present will come your way, so do not miss any chance; actively pursue every performance.
An important presentation should also be rehearsed in front of part of the audience first; listen to their suggestions. Labmates or colleagues are good rehearsal audiences, since they share your background and can point out where the presentation falls short.

Rule 8: Use Visuals Sparingly but Effectively
There are many presentation styles. Very few people can move an audience without any visuals at all; in most cases the presenter needs some visual aids (graphs, charts). Preparing good visuals deserves another ten simple rules of its own.
Rule 7 helps you decide the right amount of visuals. In general, about one visual per minute is appropriate; preparing too many makes it easy to run over time. Of course some charts take longer to explain and some take less; again, Rule 7 helps you decide. Avoid reading the slides word for word; remember that your audience can read, too.
Visuals should complement the talk or provide solid data to support your argument. Too many and too few are both bad; keeping the points few, simple, and clear is the way to go.

Rule 9: Review Audio and Video of Your Presentations
Nothing helps you improve more than actually being in the audience for your own presentation. After listening to and watching yourself, you can fix what fell short in the next presentation. If you discover bad habits, work hard to break them.

Rule 10: Acknowledge People Appropriately
People love to be acknowledged, but too many gratuitous acknowledgements bury the people who actually contributed. And if you violate Rule 7, the acknowledgements may become rushed and run over time. The best moments to acknowledge someone are at the start of the presentation, or at the point where that person's contribution is most visible.

One last reminder: even if you follow all ten rules, success is not guaranteed.
Even with thorough preparation, the live interaction with the audience is hard to predict.
Sometimes you are sure the presentation will go smoothly, but afterward it feels like a mess.
Sometimes you worry about what the audience thinks, but in the end you come away delighted.
Such is life; quite interesting, isn't it? Feel free to leave a comment and share your thoughts.

-------------------------------------------------------------------
The original article: http://www.scivee.tv/node/2903

Rule 1: Talk to the Audience

We do not mean face the audience, although gaining eye contact with as many people as possible when you present is important since it adds a level of intimacy and comfort to the presentation. We mean prepare presentations that address the target audience. Be sure you know who your audience is—what are their backgrounds and knowledge level of the material you are presenting and what they are hoping to get out of the presentation? Off-topic presentations are usually boring and will not endear you to the audience. Deliver what the audience wants to hear.

Rule 2: Less is More

A common mistake of inexperienced presenters is to try to say too much. They feel the need to prove themselves by proving to the audience that they know a lot. As a result, the main message is often lost, and valuable question time is usually curtailed. Your knowledge of the subject is best expressed through a clear and concise presentation that is provocative and leads to a dialog during the question-and-answer session when the audience becomes active participants. At that point, your knowledge of the material will likely become clear. If you do not get any questions, then you have not been following the other rules. Most likely, your presentation was either incomprehensible or trite. A side effect of too much material is that you talk too quickly, another ingredient of a lost message.

Rule 3: Only Talk When You Have Something to Say

Do not be overzealous about what you think you will have available to present when the time comes. Research never goes as fast as you would like. Remember the audience's time is precious and should not be abused by presentation of uninteresting preliminary material.

Rule 4: Make the Take-Home Message Persistent

A good rule of thumb would seem to be that if you ask a member of the audience a week later about your presentation, they should be able to remember three points. If these are the key points you were trying to get across, you have done a good job. If they can remember any three points, but not the key points, then your emphasis was wrong. It is obvious what it means if they cannot recall three points!

Rule 5: Be Logical

Think of the presentation as a story. There is a logical flow—a clear beginning, middle, and an end. You set the stage (beginning), you tell the story (middle), and you have a big finish (the end) where the take-home message is clearly understood.

Rule 6: Treat the Floor as a Stage

Presentations should be entertaining, but do not overdo it and do know your limits. If you are not humorous by nature, do not try and be humorous. If you are not good at telling anecdotes, do not try and tell anecdotes, and so on. A good entertainer will captivate the audience and increase the likelihood of obeying Rule 4.

Rule 7: Practice and Time Your Presentation

This is particularly important for inexperienced presenters. Even more important, when you give the presentation, stick to what you practice. It is common to deviate, and even worse to start presenting material that you know less about than the audience does. The more you practice, the less likely you will be to go off on tangents. Visual cues help here. The more presentations you give, the better you are going to get. In a scientific environment, take every opportunity to do journal club and become a teaching assistant if it allows you to present. An important talk should not be given for the first time to an audience of peers. You should have delivered it to your research collaborators who will be kinder and gentler but still point out obvious discrepancies. Laboratory group meetings are a fine forum for this.

Rule 8: Use Visuals Sparingly but Effectively

Presenters have different styles of presenting. Some can captivate the audience with no visuals (rare); others require visual cues and in addition, depending on the material, may not be able to present a particular topic well without the appropriate visuals such as graphs and charts. Preparing good visual materials will be the subject of a further Ten Simple Rules. Rule 7 will help you to define the right number of visuals for a particular presentation. A useful rule of thumb for us is if you have more than one visual for each minute you are talking, you have too many and you will run over time. Obviously some visuals are quick, others take time to get the message across; again Rule 7 will help. Avoid reading the visual unless you wish to emphasize the point explicitly, the audience can read, too! The visual should support what you are saying either for emphasis or with data to prove the verbal point. Finally, do not overload the visual. Make the points few and clear.

Rule 9: Review Audio and/or Video of Your Presentations

There is nothing more effective than listening to, or listening to and viewing, a presentation you have made. Violations of the other rules will become obvious. Seeing what is wrong is easy, correcting it the next time around is not. You will likely need to break bad habits that lead to the violation of the other rules. Work hard on breaking bad habits; it is important.

Rule 10: Provide Appropriate Acknowledgments

People love to be acknowledged for their contributions. Having many gratuitous acknowledgements degrades the people who actually contributed. If you defy Rule 7, then you will not be able to acknowledge people and organizations appropriately, as you will run out of time. It is often appropriate to acknowledge people at the beginning or at the point of their contribution so that their contributions are very clear.
As a final word of caution, we have found that even in following the Ten Simple Rules (or perhaps thinking we are following them), the outcome of a presentation is not always guaranteed. Audience–presenter dynamics are hard to predict even though the metric of depth and intensity of questions and off-line followup provide excellent indicators. Sometimes you are sure a presentation will go well, and afterward you feel it did not go well. Other times you dread what the audience will think, and you come away pleased as punch. Such is life. As always, we welcome your comments on these Ten Simple Rules by Reader Response.

Monday, February 7, 2011

Key to Successful Automation II - Tool

Confucius said, "To do a good job, one must first sharpen one's tools."
To do test automation well, making good use of tools is essential.
This article introduces some recommended tools, and the good news is that they are all free and cross-platform.

1. STAF (Software Testing Automation Framework)
http://staf.sourceforge.net/
A very powerful testing tool.
STAF is a daemon process that provides all kinds of services, such as communication between machines, launching processes, handling the file system, managing resources, sending email, and so on.
Its feature set is quite complete, and it is well suited for building your own test framework on top of it.
Some commercial testing tools are also built on STAF.

STAF processes can communicate across operating systems

The website has complete documentation; STAF is cross-platform (Windows, Linux, Mac) and very stable.
It provides programming interfaces for many languages (Java, C, C++, Python, Perl, Tcl, Rexx).
For test case management there is STAX, which lets you describe test cases in XML; of course STAF can also be used with unit test frameworks such as JUnit.
So far I have run into only one problem: it crashes on the Japanese 64-bit edition of Windows.
The biggest worry when choosing open source software is that nobody maintains it, but STAF is maintained very well, as the project statistics on SourceForge show: bugs keep being fixed and new versions keep being released.
http://sourceforge.net/project/stats/?group_id=33142&ugn=staf&type=&mode=year

Since STAF provides services, you need to write some code to integrate the services you use.
Below is a short sample in Python.

from PySTAF import *
import sys
try:
# Initialize a STAF handle
    handle = STAFHandle("MyTest")
except STAFException, e:
    print "Error registering with STAF, RC: %d" % e.rc
    sys.exit(e.rc)
# Use STAF's PING service to check whether the STAF process is running
result = handle.submit("local", "ping", "ping")
if (result.rc != 0):
    print "Error submitting request, RC: %d, Result: %s" % (result.rc, result.result)
# Use STAF's VAR service to get the operating system information
result = handle.submit("local", "var", "resolve {STAF/Config/OS/Name}")
if (result.rc != 0):
    print "Error submitting request, RC: %d, Result: %s" % (result.rc, result.result)
else:
    print "OS Name: %s" % result.result
# Unregister the STAF handle
rc = handle.unregister()
sys.exit(rc)



2. Selenium
http://seleniumhq.org/
A web testing tool.
Before introducing it, here is one data point for reference.
In Elisabeth Hendrickson's article "Do Testers Have to Write Code?",
Selenium ranks first among the automation skills requested in testing job postings, which tells you how popular this tool is.
Selenium provides everything web testing needs: opening the browser, typing text, clicking buttons, checking text, and so on.
Being cross-platform and cross-browser, supporting many programming languages (see the list), and providing complete documentation made Selenium an instant hit.
Best of all, Selenium also offers a Firefox add-on that records the user's actions as code, which lowers the barrier to getting started.

Below is the Python sample code from the official site, combined with Python's unittest.
It is quite intuitive to use.


from selenium import selenium
# This is the driver's import.  You'll use this class for instantiating a
# browser and making it do what you need.

import unittest, time, re
# These are the basic imports added by Selenium-IDE by default.
# You can remove the modules if they are not used in your script.

class NewTest(unittest.TestCase):
# We create our unittest test case

    def setUp(self):
        self.verificationErrors = []
        # This is an empty array where we will store any verification errors
        # we find in our tests

        self.selenium = selenium("localhost", 4444, "*firefox",
                "http://www.google.com/")
        self.selenium.start()
        # We instantiate and start the browser

    def test_new(self):
        # This is the test code.  Here you should put the actions you need
        # the browser to do during your test.

        sel = self.selenium
        # We assign the browser to the variable "sel" (just to save us from
        # typing "self.selenium" each time we want to call the browser).

        sel.open("/")
        sel.type("q", "selenium rc")
        sel.click("btnG")
        sel.wait_for_page_to_load("30000")
        self.failUnless(sel.is_text_present("Results * for selenium rc"))
        # These are the real test steps

    def tearDown(self):
        self.selenium.stop()
        # we close the browser (I'd recommend you to comment this line while
        # you are creating and debugging your tests)

        self.assertEqual([], self.verificationErrors)
        # And make the test fail if we found that any verification errors
        # were found


Other common tools include AutoIt (for Windows UI automation) and Watir (a web testing tool for Ruby).
By making good use of these tools and standing on the shoulders of giants, we can go further and more steadily.


-----------------------------------------------------------------------------------
"To do a good job, one must first sharpen one's tools." Chinese philosopher Confucius said.
Good tools are prerequisite to the successful execution of test automation.
This article will discuss about some useful tools, and the good news is, they are all free and cross platform.


1. STAF (Software Testing Automation Framework)
STAF is a very powerful testing tool with a variety of functions.
It is a daemon process that acts as a service provider; it can handle communication between computers, launch processes, provide file system utilities, manage resources, send email, and more.
STAF provides basic testing framework utilities and can be used to develop advanced testing frameworks. Some commercial tools are based on STAF, too.

STAF processes can communicate across different platforms

The official STAF website provides a comprehensive manual that helps you adopt it.
STAF is cross-platform (Windows, Linux, Mac) and very stable.
Besides, STAF provides programming interfaces for Java, C, C++, Python, Perl, Tcl, Rexx, etc., so you can integrate it with the language you are familiar with.
For test suite management, STAX supports XML-style test case descriptions, or you can choose a unit test framework like JUnit to manage test suites.
I have run into only one problem with STAF so far: it crashes on the Japanese 64-bit edition of Windows because of some special double-byte characters.
Maintenance should be considered when choosing an open source tool. Fortunately, the SourceForge statistics show that the STAF project is in excellent shape: bugs are fixed and minor versions are released frequently.
http://sourceforge.net/project/stats/?group_id=33142&ugn=staf&type=&mode=year


STAF provides services, so you need to write some code to integrate the services you use.
Below is sample code in Python.
from PySTAF import *
import sys
try: 
# Initialize a STAF handle
    handle = STAFHandle("MyTest")
except STAFException, e:
    print "Error registering with STAF, RC: %d" % e.rc
    sys.exit(e.rc)
# Use STAF's PING service to test whether the STAF process exists
result = handle.submit("local", "ping", "ping")
if (result.rc != 0):
    print "Error submitting request, RC: %d, Result: %s" % (result.rc, result.result)
# Use STAF's VAR service to get OS information
result = handle.submit("local", "var", "resolve {STAF/Config/OS/Name}")
if (result.rc != 0):
    print "Error submitting request, RC: %d, Result: %s" % (result.rc, result.result)
else:
    print "OS Name: %s" % result.result
# Uninitialize the STAF handle
rc = handle.unregister()
sys.exit(rc)
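
The first argument of submit() is the target endpoint, so pointing it at another machine running STAF is all it takes to communicate across machines. A minimal sketch, assuming a second test machine is available (the host name "test-machine-2" is just a placeholder):

from PySTAF import *
import sys

try:
    handle = STAFHandle("RemoteCheck")
except STAFException, e:
    print "Error registering with STAF, RC: %d" % e.rc
    sys.exit(e.rc)

# Ping the STAF daemon on a remote test machine instead of "local"
result = handle.submit("test-machine-2", "ping", "ping")
if (result.rc != 0):
    print "Remote machine unreachable, RC: %d, Result: %s" % (result.rc, result.result)
else:
    print "Remote STAF responded: %s" % result.result

handle.unregister()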



2. Selenium
A web application testing tool.
Before introducing it, I would like to show you how hot it is right now.
According to Elisabeth Hendrickson's article "Do Testers Have to Write Code?", Selenium is the automation technology most frequently requested in job postings for software testers.
Selenium provides the common utilities needed for web testing, e.g., launching the browser, typing text, clicking buttons, and asserting text.
It is cross-platform and cross-browser, supports many programming languages, and of course provides a comprehensive user guide.
Best of all, Selenium has a Firefox plug-in that records user actions into test scripts, which dramatically lowers the barrier to adoption.


Below is the Python sample code from the official Selenium website, combined with Python's unittest.
It is very intuitive to use.

from selenium import selenium
# This is the driver's import.  You'll use this class for instantiating a
# browser and making it do what you need.

import unittest, time, re
# These are the basic imports added by Selenium-IDE by default.
# You can remove the modules if they are not used in your script.

class NewTest(unittest.TestCase):
# We create our unittest test case

    def setUp(self):
        self.verificationErrors = []
        # This is an empty array where we will store any verification errors
        # we find in our tests

        self.selenium = selenium("localhost", 4444, "*firefox",
                "http://www.google.com/")
        self.selenium.start()
        # We instantiate and start the browser

    def test_new(self):
        # This is the test code.  Here you should put the actions you need
        # the browser to do during your test.

        sel = self.selenium
        # We assign the browser to the variable "sel" (just to save us from
        # typing "self.selenium" each time we want to call the browser).

        sel.open("/")
        sel.type("q", "selenium rc")
        sel.click("btnG")
        sel.wait_for_page_to_load("30000")
        self.failUnless(sel.is_text_present("Results * for selenium rc"))
        # These are the real test steps

    def tearDown(self):
        self.selenium.stop()
        # we close the browser (I'd recommend you to comment this line while
        # you are creating and debugging your tests)

        self.assertEqual([], self.verificationErrors)
        # And make the test fail if we found that any verification errors
        # were found


There are other excellent tools as well. For example, AutoIt is a tool for Windows GUI automation, and Watir is a web application testing tool for Ruby.
By leveraging these tools and standing on the shoulders of giants, we are able to go further.