Monday, August 6, 2012


Multimedia in Digital Audio Workstations

            Multimedia is text, graphics, video, animation, audio, and other media that can be used to help an organization efficiently and effectively achieve its goals [4]. Today's DAW software offers many advantages over traditional analogue recording techniques. The graphics generated by the DAW act as a visual guide for identifying issues in recordings that previously could only be found by listening. An engineer's ear is still the most valuable tool, but the graphics allow precise measurement of time down to the millisecond as well as frequency identification (both can be measured by ear, but are subject to human error). As with a DSS, a DAW does not take the place of the decision maker, but rather offers resources that enable the user to make the best choices to complete the objective, which in this case is a quality recording.
            A common issue in recording is phase coherence. Phase is the relationship of two or more signals coming from microphones on the same sound source [1]. It is common recording practice to use multiple microphones on a single instrument to capture the best sound. If the two recordings are out of phase, combining them produces a degraded sound, and the signals can cancel each other out entirely. This occurs when microphone A receives the signal at the positive peak of the waveform while microphone B receives it at the negative peak; when combined, the two waves cancel and produce an output of zero. The DAW produces a graphical representation of the two waves, allowing the engineer to shift one of the signals so the waveforms are aligned, producing a phase-coherent recording.
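The cancellation described above can be sketched numerically. This is a minimal illustration only: the sample rate and test tone are arbitrary, and real recordings are rarely a perfect 180 degrees apart.

```python
import math

SAMPLE_RATE = 48_000          # samples per second (a common DAW rate)
FREQ = 440.0                  # test tone in Hz
N = 480                       # 10 ms of audio

def sine(phase_offset):
    """A sine tone with a phase offset in radians."""
    return [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE + phase_offset)
            for n in range(N)]

mic_a = sine(0.0)             # signal arriving at the + peak
mic_b = sine(math.pi)         # same signal arriving 180 degrees out of phase

# Summing the out-of-phase signals cancels them almost completely.
out_of_phase_mix = [a + b for a, b in zip(mic_a, mic_b)]
peak_cancelled = max(abs(s) for s in out_of_phase_mix)

# Nudging mic B back into alignment (what the engineer does visually
# in the DAW) restores the full combined signal.
aligned_mix = [a + b for a, b in zip(mic_a, sine(0.0))]
peak_aligned = max(abs(s) for s in aligned_mix)

print(f"out-of-phase peak: {peak_cancelled:.6f}")  # effectively zero
print(f"aligned peak:      {peak_aligned:.6f}")    # roughly doubled
```

The out-of-phase mix sums to nearly zero at every sample, while the aligned mix roughly doubles the amplitude, which is exactly the difference the waveform display makes visible.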
            Another extremely useful tool in recording, and especially in mixing, is a frequency analyzer. Frequency analyzers are usually offered as DAW plugins; they take the audio signal and display the frequencies being played. An experienced audio engineer can accurately identify frequencies by ear, but it is a very difficult skill to acquire. New engineers are usually able to identify problem frequencies to a degree, but with the help of an analyzer they can pinpoint exactly where the problem lies. Some frequencies are also difficult to hear yet necessary for the overall mix: 20-40 Hz is felt more than heard, and as we age, the ability to hear high frequencies greatly diminishes. It is fairly common for people over 25 years of age to be unable to hear above 15 kHz [2]. Frequency analyzers are valuable because we can't always hear what's down there, but with the right tools, we can see it [3].
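What an analyzer does under the hood can be sketched with a naive discrete Fourier transform. Real plugins use an FFT for speed, and the sample rate and "problem" tone here are invented for illustration.

```python
import cmath
import math

SAMPLE_RATE = 8_000   # Hz; bin width = 8000 / 256 = 31.25 Hz
N = 256               # analysis window length in samples

# A hypothetical problem tone at exactly 1 kHz (which falls on bin 32).
signal = [math.sin(2 * math.pi * 1000.0 * n / SAMPLE_RATE) for n in range(N)]

def dft_magnitudes(x):
    """Naive discrete Fourier transform; real analyzers use an FFT."""
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples // 2)]   # keep only positive frequencies

mags = dft_magnitudes(signal)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
peak_freq = peak_bin * SAMPLE_RATE / N
print(f"loudest frequency: {peak_freq:.1f} Hz")
```

The peak bin lands at 1000 Hz, the tone we put in, which is the "pinpointing" the essay describes: the analyzer turns a frequency you may not be able to hear into a number you can read.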
            In closing, multimedia in a DAW offers a means to measure sound in a visual format, giving the engineer another point of reference while editing and mixing.


           














4. Textbook, page 334

Wednesday, August 1, 2012

Clinical Decision Support Systems


 Clinical Decision Support is a process for enhancing health-related decisions and actions with pertinent, organized clinical knowledge and patient information to improve health and healthcare delivery [1]. Any decision is best made when all the facts are considered. In an effort to increase patient safety, clinical decision support systems can aid a physician in making the correct diagnosis and treatment plan. CDS systems utilize EMRs (Electronic Medical Records) along with medical databases to produce the treatment plans that would best suit the patient. The doctor then takes this information and makes the final decisions regarding treatment.

 One of the major advantages of CDS systems is the access granted to clinicians. If you are visiting a family doctor, it is likely they will have all of your medical history and can make informed decisions without the use of CDS. But in emergency situations, a patient could be incapacitated and unable to relay pertinent information such as drug allergies. EMR permits more than one user to access your record at the same time [2]. So while you are in the ambulance, the EMTs can access your information while the hospital prepares for your arrival. With CDS, the doctor can make a more informed decision because they have a greater understanding of the patient's history.

 As with any DSS, flexibility is key to the successful operation of CDS. Aside from patient history updates, medical knowledge is expanding, new drugs and diagnoses are continually being discovered, and evidence-based guidelines change as new evidence is accumulated [3]. The database must be continually updated in order for the output to be correct. Frequent inaccurate alerts can lead clinicians to ignore all of the CDS advice [3]. Knowledge management is vitally important for the CDS to function properly. Just as the doctor cannot make an informed decision without all the facts, neither can the CDS.

 In closing, there are many advantages to CDS systems. Essentially, a CDS system acts as a second opinion that can aid your doctor in making the best decision for you. The meta-analyses of studies of alerts and reminders for decision support have been fairly consistent in showing that they can alter clinician decision-making and actions, reduce medication errors, and promote preventive screening and use of evidence-based recommendations for medication prescriptions [3]. CDS systems offer information that aids doctors in the diagnosis and treatment of patients to ensure the best care possible.

1. http://www.himss.org/asp/topics_clinicalDecision.asp 
2. http://www.ssmedcenter.com/about/emr.cfm
3. http://healthit.ahrq.gov/images/jun09cdsreview/09_0069_ef.html    

Monday, July 30, 2012

Super Sales Chart steps


Steps:
1. Logged into Zoho.
2. Clicked existing database, then Super Store Sales, and copied the database.
3. Added a row for customer Tim Lawrence and made up the other data to fill the row.
4. Added a custom formula for profit margin: (Profit / Cost) × 100.
5. Changed the data type to percent and limited decimal places to 1.
6. Changed the "Avg Sales in a Day" chart from a bar chart to a line chart and added profit as text.
7. Posted to blog.
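The custom formula from steps 4 and 5 can be sketched in Python; the sample figures below are invented, not taken from the actual Super Store Sales data.

```python
def profit_margin(profit, cost):
    """Custom formula from step 4: profit as a percentage of cost."""
    return round(profit / cost * 100, 1)   # one decimal place, per step 5

# Hypothetical figures for the made-up Tim Lawrence row.
print(profit_margin(25.0, 125.0))   # 20.0
print(profit_margin(50.0, 100.0))   # 50.0
```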


Enterprise 2.0
            The popularity of companies that integrate consumer participation is on the rise. Andrew McAfee defines Enterprise 2.0 as "the use of emergent social software platforms within companies, or between companies and their partners or customers" [1]. I believe the driving force behind the success of Enterprise 2.0 is the belief that brands don't always tell the truth, but peers typically do [2]. In order for a company to build trust among its consumers, it must enable consumers to participate in reviews that are open to the public. Consumers who produce reviews are referred to as prosumers, a key component of Enterprise 2.0. Proliferation of user-generated (prosumer) content has created competition (for the attention of consumers) with professionally produced content from established content industries [3].
            Creating a successful business means establishing trust between producers and consumers. By involving prosumer reviews, consumers can see an unbiased opinion of the company or product. Our customers, prosumers, need to realize and see manifested in our business model that we don't have a hidden agenda; we must help them learn from their experience with us that they can trust us [4].
            eBay.com and Amazon.com are both examples of how successful Enterprise 2.0 can be. Customers are not coaxed, persuaded, or suckered; they are encouraged to do research, exercise their own judgment, and participate in the multilog of discussions. Amazon and eBay are primary examples of how to create added value through transparency and participation. In this way, these companies represent the "new" Web [5]. Both companies feature user-generated reviews that build trust between consumers and producers. In this business model, customers are not only served; they are also integrated into the transaction. More specifically, they drive the transaction; they make (as in: build) the business [5].
            In conclusion, Enterprise 2.0 is an effective way of building a company by generating trust through peer-produced content. The success of eBay and Amazon emphasizes the effectiveness of this approach. E-commerce must incorporate and encourage user interaction to gain competitive advantage in today's market.










Wednesday, July 25, 2012



Distributed Processing

            Distributed processing is when processing is split across multiple servers or computers. I chose to research this further because in the previous chapters the emphasis was on cutting costs by consolidating computing and databases. I remember thinking during the readings that this seemed quite vulnerable to attacks and natural disasters, so I was glad to see in chapter four that this is exactly what is leading companies to invest in more distributed systems.
            The more I researched, the more advantages I discovered. Depending on the needs of the company, it can be cheaper to use multiple inexpensive computers than to invest in a supercomputer. Reliability is another advantage of distributed computing. Hardware glitches and software anomalies can cause single-server processing to malfunction and fail, resulting in a complete system breakdown. Distributed data processing is more reliable, since multiple control centers are spread across different machines. A glitch in any one machine does not impact the network, since another machine takes over its processing capability. Faulty machines are quickly isolated and repaired. This makes distributed data processing more reliable than single-server processing systems [1]. Speed is another advantage of distributed processing: as more computers are added, it gets faster and faster. Single computers are limited in their performance and efficiency. An easy way to increase performance is by adding another computer to a network. Adding yet another computer will further augment performance, and so on. Distributed data processing works on this principle and holds that a job gets done faster if multiple machines are handling it in parallel, or synchronously. Complicated statistical problems, for example, are broken into modules and allocated to different machines where they are processed simultaneously. This significantly reduces processing time and improves performance [1].
(Shore Tech Systems developed a distributed processing program for Sys Consulting LTD that reduced total processing time per job from 2 minutes to 10 seconds [2].)
            Distributed processing also opens the opportunity for people to support causes such as cancer research simply by allowing their computer to be included in the distributed efforts of research groups. World Community Grid [3] is a website that enables people to donate unused computer time to aid in its various efforts. In 2003, with grid computing, scientists identified 44 potential treatments to fight the deadly smallpox disease in less than three months. Without the grid, the work would have taken more than one year to complete [4].
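The break-into-modules principle from the quoted passage can be sketched in Python. This is a single-machine illustration using worker threads; in a real distributed system each chunk would be shipped to a separate machine, but the divide-and-combine logic is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for one module of a larger statistical job.

    Here it is a sum of squares; in a real grid, each chunk would be
    handled by a different machine in parallel.
    """
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
chunk_size = 250_000
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Fan the four modules out to workers, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(process_chunk, chunks))

# The divided job produces the same answer as doing it all in one place.
print(total == sum(x * x for x in data))
```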




 Web 2.0 is a description of how the Web currently operates, placing heavy value on user contribution and content. Web 2.0 is a platform that gives users access to video, text, audio, and even computing all in one place. Software as a service and cloud computing allow users to utilize software without having to install it on their computers. Web 2.0 makes for a more connected marketplace by connecting people with more resources.
The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 conference was born [1]. Essentially Web 1.0 features little or no interactivity between producers and consumers. The marketing strategy that had served well in TV and print was not as effective in the new medium that encourages user participation. Web 2.0 tools can be used to do what traditional advertising does: persuade consumers to buy a company's products or services. An executive can write a blog, for instance, that regularly talks up the company's goods. But that kind of approach misses the point of 2.0. Instead, companies should use these tools to get the consumers involved, inviting them to participate in marketing-related activities from product development to feedback to customer service [2]. The websites that were able to adapt to this new environment are the ones that make up Web 2.0.

 Crowdsourcing is an important aspect of Web 2.0. Open-source material makes large tasks possible as the work is divided. Every small unit of contribution is important to a Web 2.0 service. Millions of such contributions eventually lead the website to a state of higher relevance. For instance, any conventional media company (employing hundreds of reporters) has today been easily beaten by blogging platforms like Blogger and WordPress in producing extremely frequent and relevant content, as millions of users act as contributors, building up a large resource within a much smaller span of time [3]. Open-source software and services are the backbone of what makes up Web 2.0.
 In conclusion, Web 2.0 features advances in technology that democratize information and encourage user content and feedback.



 The Web 2.0 Conference put together a list that demonstrates Web 1.0 vs. Web 2.0 [1]:

Web 1.0 --> Web 2.0
DoubleClick --> Google AdSense
Ofoto --> Flickr
Akamai --> BitTorrent
mp3.com --> Napster
Britannica Online --> Wikipedia
personal websites --> blogging
evite --> upcoming.org and EVDB
domain name speculation --> search engine optimization
page views --> cost per click
screen scraping --> web services
publishing --> participation
content management systems --> wikis
directories (taxonomy) --> tagging ("folksonomy")
stickiness --> syndication




Monday, July 23, 2012


Soundcloud
by Timothy Lawrence - Sunday, 22 July 2012, 06:43 PM
Soundcloud.com has a large database of people, songs, and groups. They manage all of this data with a search tool that can be narrowed down to specific attributes to yield a more specific result.
 Typing a keyword in the search bar will yield results for the three main entities (People, Songs, and Groups). At that point, the user can limit the results to any one of the three. Two of the three entities (People and Songs) can be broken down further to search specific attributes.
The advanced search for People offers more specific options in locating within the database. Searches can be refined by username, country, city, or category (musician/promoter/publisher…). 
The advanced search for Songs allows users to search with genre/track type/set type/duration/beats per minute/file type/label/release date/ and license options. 
Soundcloud uses a relational database in that all data is linked in its context. A result for a song is linked with the artist and the group. This approach creates a very user-friendly interface, which furthers the company's value by offering a competitive advantage.
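The relational linking described above can be sketched with an in-memory SQLite database. The table names, columns, and data below are invented for illustration; they are not SoundCloud's actual schema.

```python
import sqlite3

# Hypothetical miniature of a song-to-artist relationship.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE artists (id INTEGER PRIMARY KEY, username TEXT, country TEXT);
    CREATE TABLE songs   (id INTEGER PRIMARY KEY, title TEXT, genre TEXT,
                          artist_id INTEGER REFERENCES artists(id));
    INSERT INTO artists VALUES (1, 'tim_lawrence', 'USA');
    INSERT INTO songs   VALUES (1, 'Demo Track', 'electronic', 1);
""")

# Because each song references an artist, a single query returns a
# song together with the artist it is linked to.
row = db.execute("""
    SELECT songs.title, artists.username
    FROM songs JOIN artists ON songs.artist_id = artists.id
    WHERE songs.genre = 'electronic'
""").fetchone()
print(row)  # ('Demo Track', 'tim_lawrence')
```

The join is what makes a search result for a song carry its artist along with it, which is the user-friendly linking the post describes.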

Answers to Questions:

1. A large database with many shared attributes (many songs have the same name as they are not copyrightable).
2. It is a very user-friendly interface, making it easy for people to find what they are looking for.
3. Adding an advanced search for Groups with categories such as users/purpose/location.

