The da Vinci Surgical System is a robotic surgical system made by the American company Intuitive Surgical. Approved by the Food and Drug Administration (FDA) in 2000, it is designed to facilitate complex surgery using a minimally invasive approach, and is controlled by a surgeon from a console. The system is commonly used for prostatectomies, and increasingly for cardiac valve repair and gynecologic surgical procedures. The system is named after Leonardo da Vinci, whose studies of human anatomy ultimately led to the design of what is often regarded as the first robot in history.
The da Vinci System consists of a surgeon's console that is typically in the same room as the patient, and a patient-side cart with four interactive robotic arms controlled from the console. Three of the arms are for tools that hold objects and can also act as scalpels, scissors, bovies, or unipolar or bipolar electrocautery instruments. The surgeon uses the console's master controls to maneuver the patient-side cart's three or four robotic arms (depending on the model). The instruments' jointed-wrist design exceeds the natural range of motion of the human hand; motion scaling and tremor reduction further interpret and refine the surgeon's hand movements. The da Vinci System always requires a human operator, and incorporates multiple redundant safety features designed to minimize opportunities for human error when compared with traditional approaches.
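To give a concrete sense of what motion scaling and tremor reduction mean, here is a purely illustrative Python sketch. The scaling factor, filter choice, and function names are hypothetical; this is a one-dimensional toy, not Intuitive Surgical's actual (proprietary) control algorithm.

```python
# Illustrative only: hypothetical motion scaling plus tremor filtering.
SCALE = 0.2   # assumed 5:1 motion scaling: 1 cm of hand travel -> 2 mm of tool travel
ALPHA = 0.1   # smoothing factor: smaller values suppress tremor more strongly

def refine(hand_positions, scale=SCALE, alpha=ALPHA):
    """Map raw 1D hand positions to scaled, tremor-filtered tool positions."""
    tool = []
    smoothed = hand_positions[0]
    for p in hand_positions:
        smoothed = alpha * p + (1 - alpha) * smoothed  # exponential moving average (low-pass filter)
        tool.append(smoothed * scale)                  # motion scaling
    return tool

raw = [0.0, 1.0, 0.9, 1.1, 1.0, 1.05]  # made-up noisy hand positions, in cm
print(refine(raw))                     # smoothed, scaled tool positions
```

An exponential moving average is only the simplest possible low-pass filter; a real surgical system would use far more sophisticated signal processing and redundant safety checks.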
Moreover, the da Vinci System has been designed to improve upon conventional laparoscopy, in which the surgeon operates while standing, using hand-held, long-shafted instruments, which have no wrists. With conventional laparoscopy, the surgeon must look up and away from the instruments, to a nearby 2D video monitor to see an image of the target anatomy. The surgeon must also rely on a patient-side assistant to position the camera correctly. In contrast, the da Vinci System’s design allows the surgeon to operate from a seated position at the console, with eyes and hands positioned in line with the instruments and using controls at the console to move the instruments and camera.
The appearance of this kind of surgical system thus provides substantial benefits. It gives doctors superior visualization, precision, and comfort, increasing the efficiency of treatment. Robotic procedures also tend to involve smaller incisions, which means less pain for the patient, and they can shorten hospital stays, which reduces the cost of medical treatment as well.
Although the general term "robotic surgery" is often used to refer to the technology, this term can give the impression that the da Vinci System performs surgery autonomously. In fact, the current da Vinci Surgical System cannot function on its own in any manner: it was not designed as an autonomous system, lacks decision-making software, and relies on a human operator for all input. However, since all operations, including vision and motor functions, are performed through remote human-computer interaction, the system could in principle perform partially or completely autonomously given appropriate "weak AI" software. In any case, this system brings new hope for the development of medicine.
Reference:
1. https://en.wikipedia.org/wiki/Da_Vinci_Surgical_System
2016/10/28
2016/10/21
Coding Theory
When we hear the word "code", we naturally associate it with security. Inevitably, with the rapid development of technology, we encounter codes more and more often. In fact, beyond security, codes have various applications nowadays. Coding theory is the branch of mathematics and computer science that focuses on the properties of codes and their applications. Codes are used for data compression, cryptography, error correction, and networking. They are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, linguistics, and computer science, for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
The applications of codes can be divided into several groups. One of the main uses is error detection and correction, and channel codes are the most common type of error-correcting code. The purpose of channel coding theory is to find codes which transmit quickly, contain many valid code words, and can correct or at least detect many errors. These goals are not mutually exclusive, but they trade off against one another, so different codes are optimal for different applications. The properties needed in a given code depend mainly on the probability of errors occurring during transmission. On a typical CD, for example, the impairment is mainly dust or scratches, which corrupt long runs of adjacent bits; codes are therefore used in an interleaved manner, spreading the data out over the disk so that a single scratch damages only a small part of many code words.
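As a toy illustration of interleaving (a minimal sketch assuming a 4x4 block with letters standing in for code symbols, not the cross-interleaved Reed-Solomon coding CDs actually use), the following Python shows how writing data by rows and reading it by columns spreads a burst error across several code words:

```python
# Block interleaving: write row by row, transmit column by column.
def interleave(data, rows=4, cols=4):
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows=4, cols=4):
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

msg = list("ABCDEFGHIJKLMNOP")     # four 4-symbol "code words"
sent = interleave(msg)
sent[4:8] = ["?"] * 4              # a scratch wipes out 4 adjacent symbols
back = deinterleave(sent)
print("".join(back))               # A?CDE?GHI?KLM?OP
```

After deinterleaving, each 4-symbol code word contains only one error, which a code capable of correcting a single error per word can repair.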
Linear codes are a main branch of channel coding and denote the sub-field of coding theory in which the properties of codes are expressed in algebraic terms and then studied further. They can basically be divided into two types: linear block codes and convolutional codes. Linear block codes have the property of linearity, i.e. the sum of any two code words is also a code word, and they are applied to the source bits in blocks, hence the name. There are block codes that are not linear, but it is difficult to prove that a code is a good one without this property. The idea behind a convolutional code, on the other hand, is to make every code word symbol a weighted sum of the various input message symbols. This is analogous to the convolution used in LTI systems to find the output of a system when the input and impulse response are known.
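A minimal sketch of a linear block code is the classic Hamming(7,4) code, shown below with one common choice of generator matrix; the example also demonstrates the linearity property, since the bitwise sum (mod 2) of two code words is itself a code word:

```python
import numpy as np

# Hamming(7,4) generator matrix in standard form [I | P] (one common choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Encode 4 message bits into a 7-bit code word over GF(2)."""
    return (np.array(msg) @ G) % 2

a = encode([1, 0, 1, 1])
b = encode([0, 1, 1, 0])
# Linearity: the sum of two code words is the code word of the summed messages.
print((a + b) % 2)           # [1 1 0 1 1 0 0]
print(encode([1, 1, 0, 1]))  # same vector, since the messages XOR to [1,1,0,1]
```

Because every code word is a linear combination of the rows of G, analyzing the code reduces to linear algebra over GF(2), which is what makes linear codes so much easier to study than arbitrary block codes.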
Cryptography, or cryptographic coding, is the practice and study of techniques for secure communication in the presence of third parties (adversaries). This realm includes deciphering techniques, a frequent theme in films. Cryptography is about constructing and analyzing protocols that block adversaries. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
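As a toy illustration of the encryption side (a sketch only, not something to use in practice), the following XORs a message with a random key of the same length, which is the classical one-time pad as long as the key is truly random and never reused:

```python
import os

def xor_bytes(data, key):
    """XOR two equal-length byte strings; encryption and decryption are the same."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))          # random key, as long as the message
ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)  # applying the key again undoes the XOR
print(recovered)                        # b'attack at dawn'
```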
Line coding is another division of coding theory. A line code is a code chosen for use within a communications system for baseband transmission purposes, and line coding is often used for digital data transport. Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned for the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is called the line encoding.
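A minimal sketch of two simple line codes follows, with illustrative signal levels of +1 and -1 assumed. NRZ-L holds one level for the whole bit period, while Manchester encoding (IEEE 802.3 convention assumed here) places a transition in the middle of every bit, which helps the receiver recover the clock:

```python
def nrz_l(bits):
    """NRZ-L: a 1 is held at +1 and a 0 at -1 for the whole bit period."""
    return [+1 if b else -1 for b in bits]

def manchester(bits):
    """Manchester (IEEE 802.3): 0 is a high-to-low transition, 1 is low-to-high;
    each bit becomes two half-bit samples."""
    out = []
    for b in bits:
        out += [-1, +1] if b else [+1, -1]
    return out

bits = [1, 0, 1, 1, 0]
print(nrz_l(bits))       # [1, -1, 1, 1, -1]
print(manchester(bits))  # [-1, 1, 1, -1, -1, 1, -1, 1, 1, -1]
```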
Reference:
1. https://en.wikipedia.org/wiki/Coding_theory
2016/10/14
3D modeling
3D modeling is a branch of computer animation and 3D computer graphics. The vivid figures that appear on film screens are in fact all derived from 3D modeling. 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of an object (either inanimate or living) via specialized software. The product is called a 3D model. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.
Three-dimensional (3D) models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, and curved surfaces. Nowadays, 3D modeling has a wide field of applications. The medical industry uses detailed models of organs, often built from CT scan data; the movie and computer game industries use the technique to create figures and characters; and the engineering community uses models to design new devices. 3D models can also be the basis for physical devices that are built with 3D printers or CNC machines.
Almost all 3D models can be divided into two groups. One type is solid models: these define the volume of the object they represent (like a rock). They are more realistic, but more difficult to build, and are mostly used for nonvisual simulations. The other type is called "shell" or "boundary" models: these represent the surface, i.e. the boundary of the object, not its volume (like an infinitesimally thin eggshell). They are easier to work with than solid models, and almost all visual models used in games and film are shell models.
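A minimal sketch of a shell model in Python: a polygonal mesh stored as a list of 3D vertices plus triangles that index into it (here a tetrahedron, the simplest closed triangle mesh), together with a small routine that computes the surface area from the triangle data alone:

```python
# A tetrahedron as a triangle mesh: vertices plus index triples.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def surface_area(verts, tris):
    """Sum triangle areas: half the magnitude of the edge cross product."""
    total = 0.0
    for i, j, k in tris:
        ax, ay, az = (verts[j][n] - verts[i][n] for n in range(3))
        bx, by, bz = (verts[k][n] - verts[i][n] for n in range(3))
        cx, cy, cz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
        total += 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5
    return total

print(surface_area(vertices, triangles))  # 1.5 + sqrt(3)/2 ≈ 2.366
```

Because only the boundary is stored, the mesh says nothing about the interior; that is exactly the sense in which shell models differ from solid models.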
The process of building a model can broadly be divided into three approaches. The first is polygonal modeling, in which vertices (points in 3D space) are connected by line segments to form a polygonal mesh. The next is curve modeling, in which surfaces are defined by curves and contours. The last is digital sculpting, in which the modeler pushes and pulls the surface much as a sculptor works clay; the finished geometry is then combined with data and rendering algorithms to finally present the 3D model on the screen.
Compared with 2D modeling, 3D models have their own advantages. The first is flexibility: the ability to change angles or animate images, with quicker rendering of the changes. 3D models also ease rendering and improve the accuracy of photorealism. Thus, 3D modeling has great potential in the market.
Reference:
1. https://en.wikipedia.org/wiki/3D_modeling
2016/10/07
The history of Software Engineering
Software engineering is the application of engineering to the design, development, implementation, testing, and maintenance of software in a systematic method. It draws on knowledge from both computer science and engineering, so it can also be thought of as an interdisciplinary field. From its beginnings in the 1960s, writing software has evolved into a profession concerned with how best to create software and maximize its quality. As people demand more of their working environments and systems, including high speed, usability, and testability, software needs to be maintainable and upgradable, and software engineering plays the central role in meeting those demands.
The origins of software engineering can be traced back to around 1960; the term "software engineering" was used in lectures given by Dr. Douglas T. Ross at MIT around that time. In 1968 and 1969, the NATO Science Committee sponsored two conferences on software engineering and development, which greatly supported the development of the field in the years that followed.
However, during the period from 1965 to 1985, the field faced a crisis that strongly influenced software engineering at the time. The software crisis was originally defined in terms of productivity, but evolved to emphasize quality; some used the term to refer to their inability to hire enough qualified programmers. The crisis exposed serious problems in the development of software, including cost and budget overruns and even property damage.
From 1985 to 1989, practitioners tried hard to fix the problems exposed by the crisis, without much success. After the rise of the Internet, however, the situation changed completely.
From 1990 to 1999, the Internet rose rapidly, and programmers could deliver their projects at a rate never seen before. The growth of browser usage, built on HTML, changed the way information display and retrieval were organized. Widespread network connections led to the growth and prevalence of international computer viruses on MS Windows computers, and the vast proliferation of spam e-mail became a major design issue in e-mail systems, flooding communication channels and requiring semi-automated pre-screening. Millions of new computer users signaled the coming of a new era.
In the 21st century, software engineering has become an essential part of computing and electronic engineering, driven by the rapid development of the Internet, and it will develop at an even faster rate in the future.
Reference:
1. https://en.wikipedia.org/wiki/History_of_software_engineering