
FDA Regulation

Some emails this morning prompted me to write this. I have many thoughts on FDA regulation, formed mainly through my contact with companies that comply with it. Note that I have no experience getting something FDA certified. If you do and want to comment, please do so.

First, I note that FDA regulation does not produce quality software. Whatever goes on does not include rigorous testing. How do I know? The number of bugs that appear in FDA-regulated software. I have seen all sorts of bugs, from showing the wrong patient data to crashing on legal DICOM to processing HL7 messages incorrectly. That is to say nothing of crashes, system hangs, or other phenomena. One of my colleagues could crash a leading vendor's 3D workstation on demand. It was always funny to do it in their booth and watch their people squirm.

Second, some vendors have used and still use it as an excuse to sell hardware at hugely inflated prices. Some are worse than others, and at the slightest modification to a system they throw up their hands and talk about how they cannot guarantee that it will work. Once a vendor told us that hooking a trackball to a PACS workstation instead of a mouse could cause the software to click on random things. After all, the mouse had not been validated. Fortunately our administrator told them just where they could stick that line. Anyone familiar with software knows that pointer input goes through an abstraction layer and the software has no idea what device it is talking to.

So just what is this regulation getting the customer? Not a lot other than expense. Quality software is built by having good developers and employing good software development techniques. People interested in this should check out the book ‘Dreaming in Code’ by Scott Rosenberg.

Now obviously this is anecdotal, and you, reader, may cry foul and say that this is all not really true. I know of no company that compiles data on software quality in healthcare. Healthcare software companies would throw up all kinds of roadblocks if this were attempted, since I think most of them know what would be found. So all we have today is anecdote.

In summary, I view the FDA regulation of software as a waste of time. It is good that they try to make sure that companies are not cooking people with radiation, and that at some level they keep the pharma companies in line, although that is a completely separate and very complicated issue.

GoogleMIRC

It has been a year and a couple of months since GoogleMIRC was shown at RSNA. GoogleMIRC was a radiology vertical search engine that served as a research project. Incidentally it was my last research project at the Baltimore VA. This post will be in place of an article that I have written and rewritten but never thought was really any good. I originally intended to publish an article in Radiographics. This is obviously never going to happen now. Fortunately I can write much more informally and tell you the story of GoogleMIRC.

Before we go any further I want to acknowledge the other participants in the project who made it possible. None of this would have been possible without Khan Siddiqui. We came up with the idea together in his office while discussing some of the limitations of RSNA's MIRC project, and he worked with me to make it all possible. I want to thank Paul Wheeler, currently at Positronic, who helped out with a couple of crucial fixes, including speeding up the search algorithm and balancing the URLs that were sent to the crawler. Also Eliot Siegel, whose expectations we constantly tried to exceed. Thanks as well to the rest of the group, including Woojin Kim, Nabile Safdar, Bill Boonn and Krishna Juluru. Additionally, thanks must be offered to everyone whose web server I abused for this project, particularly the University of Alabama teaching file.

Originally GoogleMIRC was conceived as an idea to simply replace the search functionality in MIRC. Khan and I came up with the idea during one of our late afternoon discussions. Every afternoon we had an ice cream break, usually around 4:30 or 5, and discussed interesting things. We discussed simply adding a summary to each search result, like Google has; MIRC showed only the title of the case. Also, at the time (I don't know if it is still true) MIRC provided little to no relevance ranking for results. The results were partitioned by which server they came from, which is really not what the user is looking for. So with that I set out to study search technology. It was a good thing that none of us had any idea what we were getting into. This occurred at the end of January 2006.

The project quickly expanded into covering as many teaching files as possible. We wanted to provide radiologists with a tool that they could use in clinical practice that added value. We judged that radiologists would want to be able to quickly access content that was radiology specific. After all, the radiologist wants his information immediately and in a form that allows him to better perform his job. An article about the disease in Nature is not particularly useful at the time of diagnosis, no matter how interesting it might be.

I spent the next two months reading and researching search technology. There is a plethora of books, articles and other resources on the topic. My interest in technology, which had been waning, was definitely recharged. After beginning to understand some of the problems involved (which are immense) I built the first test crawler. It was quite limited, being non-distributed. It was also very impolite: it ignored robots.txt files and, since it did not throttle requests, tended to hammer servers. I learned a great many important lessons, though, about how a web crawler works and how to process HTML data.
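For readers unfamiliar with crawler politeness, here is a minimal sketch of what respecting robots.txt looks like, using only Python's standard library. It is illustrative only; the real crawler was written in .NET, and the user agent name and URL handling here are my own placeholders.

```python
# Minimal sketch of a "polite" fetch: check robots.txt before downloading.
# Illustrative only; the user agent and URL below are placeholders.
import urllib.parse
import urllib.request
import urllib.robotparser

def polite_fetch(url, user_agent="TeachingFileCrawler"):
    # Locate and parse the site's robots.txt.
    parts = urllib.parse.urlsplit(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        pass  # robots.txt unreachable; fall through and fetch anyway

    # Respect a disallow rule instead of hammering the server regardless.
    if not rp.can_fetch(user_agent, url):
        return None

    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```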

The processing of HTML data is very nontrivial. First, because browsers are very forgiving of web designers, the HTML that is downloaded is often broken: missing tags, unclosed tags, things that start and stop suddenly. Many hours were spent in the debugger and on adding a module to clean the incoming HTML and prepare it for processing. The decision Netscape made back in the mid-90s to tolerate sloppy markup still haunts us today with poorly written HTML. Commercial search engines such as Google and Yahoo do much more with the HTML data, including determining a word's importance by its location in the document and how large its font is relative to other words in the document.
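As an illustration of the kind of cleanup module I mean, here is a sketch of tolerant text extraction using Python's standard-library HTMLParser, which degrades gracefully on missing and unclosed tags. The original module was written in .NET; this is just a sketch of the idea.

```python
# Sketch of pulling plain text out of messy HTML. html.parser tolerates
# missing and unclosed tags, so broken markup does not crash the pipeline.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # boilerplate we never want to index

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self._skip_depth = 0
        self._chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self._chunks.append(data.strip())

    def text(self):
        return " ".join(self._chunks)

extractor = TextExtractor()
extractor.feed("<html><body><p>Knee MR teaching file<p>Unclosed tags are fine")
print(extractor.text())  # "Knee MR teaching file Unclosed tags are fine"
```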

So the first crawler was built in April, and by early May I had decided to completely do away with it. I had never really intended it as the final version, and it had become a huge mess as I added features. The new crawler was a distributed crawler with a central controller and services running on different computers that downloaded the pages. It throttled its requests to specific hosts, contacting a remote computer no more than once every 30 seconds. How did the crawl work? Basically I used Radiology Education to seed the crawler with about 400 URLs. Big sites that were not really relevant, such as Google, Microsoft, and Flickr, were removed by hand. From there the crawl went out and crawled all the sites that it found.
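Here is a minimal sketch of that per-host throttle. The 30-second interval comes from the description above; everything else (the class and method names, the example URLs) is invented for illustration.

```python
# Sketch of a per-host throttle: never contact the same host more than once
# every 30 seconds. Illustrative only.
import time
from urllib.parse import urlsplit

class HostThrottle:
    def __init__(self, min_interval=30.0):
        self.min_interval = min_interval
        self.last_contact = {}  # host -> timestamp of last request

    def wait_for(self, url):
        host = urlsplit(url).netloc
        now = time.monotonic()
        earliest = self.last_contact.get(host, 0.0) + self.min_interval
        if now < earliest:
            time.sleep(earliest - now)   # back off until the host is "cool"
        self.last_contact[host] = time.monotonic()

throttle = HostThrottle()
for url in ["http://example.org/case1", "http://example.org/case2"]:
    throttle.wait_for(url)   # the second call blocks for roughly 30 seconds
    # fetch(url) would go here
```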

By June the crawler was fully functional, with a plethora of features such as whitelisting/blacklisting, throttling, a new URL extractor, and code to recrawl a page a couple of times in the event of an error. The crawler at this point was very much improved over what it had been and existed in basically this form for the duration of the project. I also implemented a special component of the crawler for retrieving data from sites running RSNA MIRC software. Since there was a cap on the results that were returned to the user, I implemented a paging system that allowed the crawler to retrieve all the results.
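The recrawl-on-error logic amounts to a retry loop with a backoff between attempts. A sketch, where fetch() stands in for whatever download call the crawler actually used:

```python
# Sketch of recrawl-on-error: try a page a few times before giving up,
# waiting longer between attempts. Illustrative only.
import time

def fetch_with_retries(fetch, url, attempts=3, backoff=10.0):
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch(url)
        except OSError as exc:                   # network errors, timeouts, etc.
            last_error = exc
            time.sleep(backoff * (attempt + 1))  # back off a bit more each time
    raise last_error                             # all attempts failed
```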

In June I started seriously working on indexing. I built an inverted index to allow the text to be searched. I computed PageRank for the currently known graph of URLs. The PageRank computation was handled as described in Larry and Sergey's original paper, on a single machine, and each iteration took several hours to run. I was able to get convergence at around 10 iterations, which is consistent with the literature. This was actually a bit more work than these words do justice to. I also began to work on document classification with a Bayesian classifier. The classifier used teaching files from a commercial DVD as training documents, with common words removed. This classifier did allow us to determine whether a page was related to radiology or not by its content. I will note here that this was a very primitive attempt. Using the data we had, I could have incorporated a variety of other information into the algorithm, such as the content of pages that linked to it or that it linked to.
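For readers who have not seen it, here is a sketch of PageRank by power iteration in the spirit of the original paper. The damping factor of 0.85 is the value Page and Brin used; the toy graph is mine, and the real computation ran over a far larger URL graph.

```python
# Sketch of PageRank by power iteration. `links` maps each URL to the URLs it
# links to. Illustrative only; a toy graph, not the crawled web.
def pagerank(links, damping=0.85, tol=1e-6, max_iter=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(max_iter):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            # Dangling pages (no known outlinks) spread their rank everywhere.
            targets = [t for t in outlinks if t in rank] or pages
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        delta = sum(abs(new_rank[p] - rank[p]) for p in pages)
        rank = new_rank
        if delta < tol:   # converged; on real graphs typically within ~10 iterations
            break
    return rank

# Tiny example graph:
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```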

July and August were spent working on various analysis projects as well as building a search algorithm. I used the Vector Space Model because of its simplicity, even though it tends to be biased toward shorter documents. In July I had a completely working version, although it was still far short of where I wanted it to be. I built a stemmer using Porter stemming and built in support for both go and stop words. Stemming reduces words to their root, so that radiologist and radiologists would both appear in a search for radiologist. Go words are never stemmed, and stop words are words that are not indexed. Stop words are common words such as a, an, of, etc.
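A sketch of how the vector space model scores a query against documents follows. The stop and go word lists are toy examples, and the "stemmer" is a crude plural-stripping rule standing in for the real Porter algorithm; the documents and query are made up.

```python
# Sketch of the vector space model: documents and queries become term-weight
# vectors (TF-IDF here) and are compared by cosine similarity.
import math
from collections import Counter

STOP_WORDS = {"a", "an", "of", "the", "and", "in"}  # never indexed
GO_WORDS = {"mr"}                                   # never stemmed

def tokenize(text):
    terms = []
    for word in text.lower().split():
        word = word.strip(".,;:!?()")
        if not word or word in STOP_WORDS:
            continue
        if word not in GO_WORDS and word.endswith("s"):
            word = word[:-1]          # crude stand-in for Porter stemming
        terms.append(word)
    return terms

def tfidf_vectors(docs):
    tokenized = [Counter(tokenize(d)) for d in docs]
    df = Counter(t for counts in tokenized for t in counts)
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in counts.items()}
            for counts in tokenized]

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = (math.sqrt(sum(x * x for x in u.values())) *
            math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

docs = ["Knee MR of the radiologists", "Chest radiograph teaching file"]
vectors = tfidf_vectors(docs)
query = dict(Counter(tokenize("radiologist knee")))
print([cosine(query, v) for v in vectors])  # the first document scores highest
```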

At the end of August I decided to leave the VA for the purpose of commercializing a vertical search engine on the web for radiologists. When I left at the end of September we were in fairly good shape for RSNA, though there was still a scramble to polish it. It never really reached the point that I wanted it to.

There were many interesting things we found. One was how bad misspelling is on the Internet, even on commercial teaching files. Several that we utilized for various things were definitely not run through a spell checker. The crawler was the best working part of the whole system. It was able to sustain about 2 Mbps of traffic and download millions of pages. Further work would be needed to make it scale, which would include partitioning the URL database and allowing multiple crawl managers to work on different lists of URLs. Still, the crawler was powerful enough to crawl through the radiology portion of the web. One of the reasons that this does not really make a good scientific article is the lack of measurable data. We did not collect data on radiologist satisfaction with GoogleMIRC. We did not measure recall and precision, two traditional measures of search engine quality.

The project had a number of limitations. First was my own choice of technology. I am a heavy .NET user, and I implemented GoogleMIRC in .NET. That was not a bad decision. However, I decided to use SQL Server 2005 as the data store. This was a very poor decision whose ramifications I did not understand at the time. It did save a lot of developer time, which I judged to be more valuable for the purposes of the project since I was the only person programming on it. I wish I had known about Lucene at the time and used the .NET port of it. That would have saved a tremendous amount of time on building the index and search algorithm and probably led to better results. There definitely would have been more features, like thumbnails. Furthermore, I wish I had known about Nutch and Hadoop. When I found them about a year ago I kicked myself. Nutch is an open source search engine built in Java. Hadoop is a distributed computing platform that replicates Google's infrastructure. Building in Java may have been wiser due to the number of open source mathematical libraries available for tasks such as singular value decomposition, a crucial piece of a technique called latent semantic indexing.

Most limitations really centered around the fact that there was only one developer on the project. It is crazy to try to build a search engine yourself. There are a lot of moving pieces. It is especially challenging if you really want to make it scale up, since many techniques that work on one machine will not work across multiple machines.

I personally got a tremendous amount out of the project. For instance, since I used SQL Server and built my own index and search algorithms, I gained a solid understanding of the issues there. I know how to build a crawler that scales reasonably well. Working on a project like this, you gain a new-found understanding of the scale of the web. I tried lots of things that did not work out at the time, such as singular value decomposition for finding common concepts in documents, that I have since gotten to work.
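For the curious, here is a toy sketch of that idea, latent semantic indexing via SVD. The term-document counts and the choice of two concepts are invented for illustration; the interesting engineering is doing this at web scale.

```python
# Sketch of latent semantic indexing: take the SVD of a term-document matrix
# and keep the top k singular vectors as "concepts".
import numpy as np

# Rows are terms, columns are documents (toy counts).
term_doc = np.array([
    [2, 0, 1, 0],   # "knee"
    [1, 0, 2, 0],   # "mr"
    [0, 3, 0, 1],   # "chest"
    [0, 1, 0, 2],   # "radiograph"
], dtype=float)

U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)

k = 2  # number of latent concepts to keep
doc_concepts = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a document in concept space

# Documents about similar topics end up close together in concept space,
# even when they share few exact terms.
print(np.round(doc_concepts, 2))
```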

What comes after? Yottalook builds on many of the ideas and leverages Google's custom search technology. I have not stopped working on search and hope to publicly show what I have been working on this year.


More overseas reading

So over the weekend I had a few conversations about my overseas reading post, and I wanted to expand on it further. A few years ago a well-known colleague and I were discussing CAD. I simply remarked that it might be cheaper to use humans in a country with a low cost of labor.

So what might that look like? Imagine you have a guy. This guy reads knee MR. Knee MR only. He is not a radiologist, but he is trained. He does not offer final reads, only a pre-read. Over time a radiologist may become comfortable with our guy's knee MR reads. Since that is all he will do, he should be very good at reading knee MR.

Now the radiologist needs to make sure he is right. Ultimately, if there is an error, that will come back to the radiologist, not the guy in India, China, or Nigeria. Still, when considering the costs and benefits, it may be a very effective way to save radiologists time. Each minute of radiologist time saved is more capacity to read cases. As we see consolidation among practices, this is sure to emerge to help create some economies of scale.


Blog Emergency

Please see Dalai's post. He is being pressured to censor his blog. Please post comments in support of him. When people place pressure on bloggers who call them out, it is truly a sign of cowardice. Blogs are an open forum where anyone can comment. I will not censor any non-spam comment posted to my blog. Dissent is good. I am sure Dalai would do the same should you respond.

And to the company in question, you know who you are: look at how Microsoft embraced the blog world, not by censoring the conversation but by being part of it. Intimidation only works so much.


Overseas Reading

There was quite a flutter of conversation this year at RSNA over having Indian radiologists pre-read studies and having those findings delivered to a US radiologist. Dalai has a post about it. In general, almost everyone that I have talked to is completely opposed to overseas reading by radiologists. It will not shock some of my readers to learn that I am not.

When I was a researcher we conducted numerous reader studies under controlled conditions. For those of you not familiar: when researching the impact of new technologies or methodologies on the accuracy of radiologist findings, we display carefully selected cases in a controlled manner. There is a specific task, such as identifying cervical spine fractures, and a set of normal cases with no pathology is included in the dataset. The process is extremely onerous and tedious. However, you get quantifiable data on the radiologist's accuracy. So what's good accuracy? I would say above 80% and the radiologist is quite good. Very few people will cross the 90% barrier.

Now, to determine what a good US radiologist's accuracy should be you would obviously need a much larger study. But let's for the sake of argument say that the average radiologist is correct 75% of the time. What if an Indian radiologist could show that he was also correct 75% of the time? Why should he not be allowed to read?

A potentially much more valuable but much, much harder to obtain metric would be tying the findings and recommendations of a radiologist to patient outcomes. This is much harder to prove since patient care is a very complicated animal with many players all of whom get to make decisions that may conflict with one another. I just feel that I have to mention this since this would be a truer measure of quality than simply accuracy.

So where does that leave us? Will this happen? No. It would not matter if the Indian radiologist was right 90% of the time over a large number of studies. The ACR would simply say that they were not US trained and could not be as good as American radiologists. As an across-the-board statement that is most probably true. However, I do consider it to be unprovable, since I don't know where the data would come from. If someone else wants to contest that view, please do so.

It turns out that it is easy to say that the Indian radiologist would do a bad job. It sounds true to Main Street America. And more importantly, it helps people feel good, which it turns out is more important than being right.


Update on Emageon

I blogged about Emageon recently. Their third quarter was less than stellar. I continue to believe that the mid-sized PACS companies will have a difficult time in today's market. They have very little to differentiate them from the major players other than being smaller. I reiterate that I believe the quants are wrong: the fact that these companies have departed from the mean is a sign of a difficult market for them, not a sign that the stock is undervalued. Anyone else have thoughts on this?

Check out the institutional holdings for Emageon.


A Workflow Example

Here is a great example of a situation that is in need of all kinds of workflow engineering. Tim shows why having detailed operational data is crucial to making business decisions. If there is a deep understanding of a business then solutions like the one Tim discusses will be much easier to justify and can lead to a very high performance organization.


GE buys Dynamic Imaging

GE has bought Dynamic Imaging. Whoa, that's big news. Props to Dalai, who got this first. To me this is a clear admission that Centricity was a dead animal. The code base is not designed to operate in a true web environment. Centricity is a clunker with many more technologically advanced rivals, such as Dynamic Imaging.

What does this mean? First, I think that M&A activity in healthcare is hot right now, even with the private equity guys complaining about access to liquidity. Commissure was acquired by Nuance. Now GE buys Dynamic Imaging. GE is the second major company to buy another company that makes the same product that it does. Philips bought Stentor and replaced their existing PACS with it. I am sure that DI's technology will become front and center as GE's PACS offering, even if they rebrand it Centricity. They would be fools not to.

I have said in the last few days that I do not see much room for small to mid-sized PACS companies. There is another interesting exit strategy for them: selling to a large company to replace its PACS. Maybe it is time for Siemens to pony up and buy Emageon, Amicas, or one of the many privately held PACS companies.


Emageon

An offhanded comment from one of my friends this evening had me researching Emageon (EMAG). I was looking for any signs that something was going on with the company. They have had a rough go of it recently, as I think small and mid-sized PACS companies have had and will continue to have. I did uncover a couple of interesting things.

I believe that it has been heavily bought by quantitative hedge funds. What makes me think that? Have a look at the owners with more than a 5% stake. See any big names? I do: D. E. Shaw, one of the top quantitative trading firms on Wall Street today. Together, the 8 firms that each own more than 5% of Emageon hold 67.3% of the company. I admit that I am stretching a bit, since I do not know the exact trading strategies of all these firms. I do know that D. E. Shaw uses statistical arbitrage as a major trading strategy. Read about statistical arbitrage if you are not familiar with it.

What does this mean? A large percentage of the company is owned by firms whose models probably don't understand the dynamics of the radiology marketplace. Their models show a pricing discrepancy based on historical data and the relation to other stocks. I believe, then, that Emageon may be overvalued in the long term. I think that should the quants have another blood bath like they did in August, or should one of them have liquidity issues, the stock could decline rapidly. I don't see upside with this company from an investment standpoint.


Management shakeup at Amicas

Dalai was the first I saw to point out that Amicas (AMCS) is waving goodbye to Peter McClennen, its President and COO. I am actually not shocked about this. Amicas is a second-tier PACS company that has struggled to find its way. The company is thought of as a growth company, but it seems that a lot of time has been spent just maintaining its position in the market. The stock has also lagged, dropping from $5 to trade now at around $3. Now, this is not Vonage territory, but something over there is not working.

I am not about to knock Peter. I have known him since I was 16 and he was working at Fuji. I knew him when he worked at GE. I talked to him a couple of times a year at Amicas. He is a stand-up individual who has a lot to bring to the table for any company. He is also one of the most enthusiastic people I have ever met.

I think that Amicas is the first company (maybe not) that is showing signs of problems. There are a great many second-tier PACS vendors that have struggled to differentiate themselves from the major players: GE, AGFA, Fuji, Philips, and McKesson. Those companies have a very significant market share, leaving Amicas, Dynamic Imaging, Emageon, and a plethora of other companies to pick up the rest.

Over the next few years numerous small PACS companies are going to die. There is simply not enough market share to go around. Those players that are able to differentiate themselves and find niche markets to serve will survive although probably not forever.

A major force in the industry will be the introduction of the native 3D PACS from TeraRecon. The deep integration of 3D is something that will significantly raise the technological bar for all PACS companies. Once that happens, we will see the first real game changer to force the core PACS technology to change in a long time, maybe since DICOM.

The other problem for smaller companies is the exit strategy. There are not a lot of companies in radiology that need a PACS as part of their offerings, so there are not a lot of potential acquirers left. Probably a couple of up-and-coming EMRs will want one, but who can say? I think that for small PACS companies the future looks bleak.
