Apple’s initial legal victory over rival HTC in a patent infringement suit could pave the way for Apple to collect high royalties from devices running Google Android, according to one analysis.
Mike Abramsky with RBC Capital Markets believes that Apple has the upper hand over HTC, which is a smaller handset maker with a limited portfolio of intellectual property. As such, Apple could potentially push for an injunction and ask the U.S. International Trade Commission to bar the import of HTC handsets.
Instead, Abramsky believes it’s more likely that Apple will try to establish a high royalty precedent on Android devices. He said the iPhone maker could garner a deal that’s similar to or even higher than the $5 per unit that Microsoft collects on HTC Android devices.
From Apple Insider
$5 a unit. Wow. But Apple has some remarkable technology covered by the patents at issue. For example:
When the handler 44 requests a facsimile transmission, for example, the real-time function block issues commands to start the real-time engine and install the various modules that are needed for it to function as a virtual telephone. Binary facsimile image data is transferred to the real-time engine via the FIFO buffers, where it is encoded as PCM data which is further encoded according to the transport medium over which it is to be transmitted. If the adapter is connected to a telephone line, for example, these signals can be encoded as 16-bit pulse-code modulated (PCM) samples, and forwarded directly to the adapter 36 via the serial driver 42 . Alternatively, if the transport medium is an ISDN line, the modem signals are encoded as mulaw-companded 8-bit PCM signals. The different types of encoding are stored in different tables, and the appropriate one to be used by the real-time engine is installed by the real-time function block during the initial configuration of the engine and/or designated by the API 48 at the time the command to transform the data is issued.
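For the curious, the "mulaw-companded 8-bit PCM" step the patent leans on is plain old G.711 μ-law encoding, a telephony standard dating to the 1970s. A minimal sketch of that standard companding step (my code, not the patent's):

```c
#include <stdint.h>

#define MULAW_BIAS 0x84   /* standard G.711 bias of 132 */
#define MULAW_MAX  0x7FFF

/* Compress a 16-bit linear PCM sample to one 8-bit mu-law byte (G.711). */
static uint8_t linear_to_mulaw(int16_t sample)
{
    int sign = (sample < 0) ? 0x80 : 0;
    int magnitude = (sample < 0) ? -(int)sample : sample;

    /* Clip so that adding the bias cannot overflow 15 bits. */
    if (magnitude > MULAW_MAX - MULAW_BIAS)
        magnitude = MULAW_MAX - MULAW_BIAS;
    magnitude += MULAW_BIAS;

    /* Segment number = position of the highest set bit above bit 7. */
    int exponent = 7;
    for (int mask = 0x4000; (magnitude & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;

    /* Four mantissa bits just below the leading bit; result is inverted. */
    int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}
```

Silence (sample 0) encodes to `0xFF` and full-scale positive to `0x80`, which is the expected G.711 behavior. Nobel committee, take note.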
Too stunning. There should be a Nobel prize in there – at least. Take a look.
CS101 – Introduction to Computing Principles
From Stanford! When I taught CS 101 at the crappy state school, we did some coding in C, some algorithm analysis, some architecture, some assembly. The assembly helped give people a sense of architecture and to see how code could be data and vice-versa. I was really behind the times.
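That code-is-data exercise doesn't even need assembly; a few lines of C make the same point. A classroom-style sketch (reading a function's bytes is implementation-defined, but it works on typical desktop platforms and makes the point vividly):

```c
#include <stdio.h>

/* "Code is data": the machine instructions of a function are just
 * bytes sitting in memory, and nothing stops us from reading them. */
static int add(int a, int b) { return a + b; }

/* Dump the first n bytes of a function's machine code in hex. */
static void dump_code(int (*fn)(int, int), int n)
{
    const unsigned char *p = (const unsigned char *)(void *)fn;
    for (int i = 0; i < n; i++)
        printf("%02x ", p[i]);
    printf("\n");
}
```

Calling `dump_code(add, 8)` prints whatever instruction bytes your compiler emitted for `add`; the output differs by architecture and compiler, which is itself a useful classroom discussion.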
Why is it that the hardware interfaces of disk drives 30 years ago were simple and clear, while today's require enormously elaborate drivers? Imagine if one could provide a command consisting of

(read/write, diskaddress, buffer_address, count, notification_address)
so that the disk drive controller wrote a completion code into the notification_address when done. Would that be hard? Why is it that virtualization hardware on modern processors devotes tens of millions of transistors to functionality that is not useful in any sensible virtual machine kernel, but makes interrupt emulation into a nightmarish mishmash? Why are obvious bottlenecks in multi-core performance, like snooping caches, made more and more elaborate and non-deterministic in operation when it's been known for 30 years that they don't scale at all? Why is it that the painfully obvious design flaws of USB can never be corrected even as it becomes more and more unavoidable?
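To make the rant concrete: the five-field command above could be nothing more than a struct the host fills in and a word the controller writes back. A hypothetical sketch (none of these names correspond to any real controller interface):

```c
#include <stdint.h>

/* Hypothetical minimal disk-controller command block, as imagined above.
 * All names are illustrative; this is not any real driver API. */
enum disk_op { DISK_READ, DISK_WRITE };

struct disk_cmd {
    enum disk_op       op;                    /* read or write */
    uint64_t           disk_address;          /* starting sector */
    void              *buffer_address;        /* DMA source/target in host memory */
    uint32_t           count;                 /* number of sectors */
    volatile uint32_t *notification_address;  /* controller writes a nonzero
                                                 completion code here when done */
};

/* The host hands the command to the controller, then polls (or sleeps
 * until an interrupt fires) on the notification word. */
static int wait_for_completion(const struct disk_cmd *cmd)
{
    while (*cmd->notification_address == 0)
        ;  /* in a real kernel: block here instead of spinning */
    return (int)*cmd->notification_address;
}
```

That's the whole interface: one submission structure, one completion word. Modern queue-based designs are elaborations of exactly this idea, buried under decades of accreted complexity.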
Eventually, we’ll reach a breaking point where someone builds a Corolla or a Tesla and the manufacturers of power-guzzling, unreliable, legacy hardware have to scramble to catch up.
Seen on Linux Weekly News.
Ext4 maintainer Ted Ts’o has responded with a rare (for the kernel community) admission that technical concerns are not the sole driver of feature-merging decisions:
It’s something I do worry about; and I do share your concern. At the same time, the reality is that we are a little like the Old Dutch Masters, who had to take into account the preference of their patrons (i.e., in our case, those who pay our paychecks :-).
One of those rare moments when art, commerce, and engineering collide to produce comedy.