Technology Thread

IEI
#61
Google pays Apple $1 billion to be search default

It seemed like an impressive win for Mozilla when the new three-year, $900 million deal with Google was announced. The Foundation derives the bulk of its funding from that search deal, and in return Google gets billions of search hits. But that deal may pale in comparison to what Google pays Apple.

According to one analyst, Google could be coughing up as much as $1 billion to be the default search engine in Safari on the iPhone, iPad, and Mac. There was plenty of number-crunching involved in arriving at this figure, but there are many reasons to believe it's at least in the ballpark. No phone has been more sought-after than the iPhone since its debut, and the iPad enjoys the same status among tablets.
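For a sense of the number-crunching involved, here's a back-of-the-envelope sketch in Python; every input below is an illustrative assumption, not a reported figure:

[code]
# Rough estimate of Google's annual payment to Apple (all inputs assumed).
safari_searches_per_day = 500e6  # assumed Google searches/day from Safari on iOS and Mac
revenue_per_search_usd = 0.01    # assumed average ad revenue per search
apple_revenue_share = 0.60       # assumed share of that revenue paid to Apple

annual_payment = safari_searches_per_day * 365 * revenue_per_search_usd * apple_revenue_share
print(f"Estimated annual payment: ${annual_payment / 1e9:.2f} billion")
# ~$1.10 billion with these inputs -- in the same ballpark as the analyst's figure.
[/code]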

Business Insider brings a little hyperbole into the equation, saying that if the iPad continues to grow, it'll surpass PCs in five years — at which point an Apple switch to another search provider could be devastating for Google. They base this speculation on the recent news that iPhoto has dropped Google Maps as a location provider, which some feel is an indication that Apple no longer needs Google at all.

Would such a move really crush Google? No, of course not.

BI is ignoring the insanely rapid growth of both Android and Chrome — where Google is, and always will be, the default search option. There's also a new generation of Google TV boxes on the way. While it would certainly make a dent if Apple signed on with Bing, Google would be just fine on its own. It's still the only search engine people use as a verb in conversation, after all.
 

IEI
#62
[video=youtube;nqMdDTv0QqU]http://www.youtube.com/watch?feature=player_embedded&v=nqMdDTv0QqU[/video]
 

IEI
#63
Canada relaxes rules on foreign ownership of wireless companies, plans spectrum auction for first half of 2013

The Canadian government made a major announcement on telecom policy late this afternoon, revealing a change that opens the door to more foreign ownership of wireless companies -- an issue that's been a point of contention for some time. As The Globe & Mail reports, the new rules will allow for 100 percent ownership of companies with a market share of ten percent or less -- something that can then grow beyond ten percent, so long as it's not done through mergers or takeovers. Previously, total foreign ownership in telecom companies had been restricted to 46.7 percent.
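Read as a simple eligibility rule, the change looks something like the sketch below; this is a hypothetical encoding for clarity, not the legal text:

[code]
def foreign_ownership_cap(market_share, grew_via_merger):
    """Illustrative reading of the new Canadian rule (not the legislation itself).

    Carriers with ten percent of the market or less may be wholly foreign-owned,
    and may keep that ownership as they grow organically; growth through mergers
    or takeovers does not qualify.
    """
    OLD_CAP = 0.467  # previous limit on total foreign ownership

    if market_share <= 0.10 or not grew_via_merger:
        return 1.0    # 100 percent foreign ownership allowed
    return OLD_CAP    # otherwise the old 46.7 percent cap still applies

print(foreign_ownership_cap(0.08, False))  # new entrant: 1.0
print(foreign_ownership_cap(0.15, True))   # grew past 10% by takeover: 0.467
[/code]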

Along with that news, the government also confirmed that the anticipated 700 MHz spectrum auction will take place in the first half of 2013 (with a 2500 MHz auction to follow within a year), and that there will be caps in place that are said to "effectively ensure that new wireless entrants and regional providers have access to prime spectrum." The auction will also have some conditions intended to bring service to rural areas, and there will be a block of spectrum reserved for public safety use. All of this, the government says, is intended to "provide Canadian families with more choices at low prices," although we'll naturally have to wait and see if that last bit pans out.
 

IEI
#64
Seagate breaks 1 terabit barrier, 60TB hard drives possible

In the world of hard drive storage, density is king, and Seagate has just achieved a breakthrough that guarantees major storage gains for the next decade.

That breakthrough is managing to squeeze 1 terabit (1 trillion bits) of data into a square inch of space, effectively opening the way for hard drives larger than the 3TB maximum we currently enjoy. How much of an improvement does this milestone represent? Current hard drives max out at around 620 gigabits per square inch, meaning Seagate has improved on that by roughly 60 percent. However, that's just the beginning.

The 1 terabit barrier has been broken thanks to a new type of recording known as heat-assisted magnetic recording, or HAMR. We first heard about HAMR back in 2009, when Seagate started discussing its research into laser heating; then, in October last year, TDK demonstrated the technology.

HAMR works by adding a small laser to the drive head that briefly heats the surface of the platter. The heat temporarily lowers the magnetic coercivity of the recording medium, so data can be written to smaller, more thermally stable grains, making it possible to pack many more bits into the same area.

Initially, the use of HAMR in Seagate's drives should see their capacity double: a 3.5-inch drive offered at up to 6TB and a 2.5-inch drive at up to 2TB. The HAMR tech scales very well, though, with an eventual upper limit of around 10 terabits per square inch. At that density we would see 3.5-inch drives with 60TB of capacity, while 2.5-inch drives would max out at 20TB.
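The capacity arithmetic is simple enough to check. Here's a quick sketch; the usable recording area per surface and the platter count are rough assumptions on our part, not Seagate figures:

[code]
BITS_PER_BYTE = 8

def drive_capacity_tb(density_tbit_per_in2, area_in2_per_surface, surfaces):
    """Capacity in terabytes for a given areal density (terabits per square inch)."""
    total_bits = density_tbit_per_in2 * 1e12 * area_in2_per_surface * surfaces
    return total_bits / BITS_PER_BYTE / 1e12

# Assumed geometry: ~5 in^2 of usable area per surface, 5 double-sided
# platters (10 surfaces) in a 3.5-inch drive -- illustrative numbers only.
print(drive_capacity_tb(1.0, 5.0, 10))    # ~6.25 TB at 1 Tbit/in^2
print(drive_capacity_tb(10.0, 5.0, 10))   # ~62.5 TB at 10 Tbit/in^2

# And the headline improvement over today's drives:
print(f"{1000 / 620 - 1:.0%}")            # 1 Tbit/in^2 is ~61% denser than 620 Gbit/in^2
[/code]

With those assumptions the numbers land right on the article's 6TB and 60TB figures.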

With the introduction of HAMR, Seagate has effectively guaranteed hard drives will continue to play a role in the storage market for the next decade. Within 5 years we could all be carrying around double-digit terabyte drives in our laptops.
 

IEI
#65
The successor to the PlayStation 3, apparently codenamed "Orbis," will use an AMD x86 processor with an AMD "Southern Islands" GPU, according to rumors that emerged last week. The Xbox 360's replacement, purported to be named "Durango," is also rumored to use an AMD GPU—either a Southern Islands variant or an equivalent to the Radeon HD 6670—this time paired with a PowerPC CPU.
Though these rumors are thoroughly unconfirmed at the moment, they're all well within the realm of plausibility. But if they prove true, the Orbis and Durango will be decidedly mid-range at launch when compared to top-of-the-line PC hardware. The Xbox 360, launched November 2005, and the PlayStation 3, launched November 2006, were both cutting-edge systems at their release. Their capabilities were unmatched by PCs of the time. If these rumors are to be believed, the eighth console generation won't be a repeat of the seventh.
[h=3]The stupendous seventh generation[/h] The Xbox 360's Xenon processor, a three-core, six-thread PowerPC unit running at 3.2 GHz, had a theoretical peak number-crunching throughput of 115 gigaflops. A contemporary Pentium 4 at 3 GHz had a theoretical peak of around 12 gigaflops when the system launched. The PlayStation 3 was in a similar situation; its Cell CPU, jointly developed by IBM, Toshiba, and Sony, had a theoretical throughput of 230 gigaflops. Contemporary Core 2 Duos topped out at 24 gigaflops—and cost many hundreds of dollars to boot.
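Those peak numbers fall out of simple arithmetic: cores times clock times floating-point operations per core per cycle. A quick sketch, using the commonly cited per-cycle throughputs behind the theoretical peaks (marketing figures, not measured performance):

[code]
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak = cores x clock (GHz) x FLOPs per core per cycle."""
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(3, 3.2, 12))   # Xenon: 115.2 GFLOPS (3 PowerPC cores with VMX128)
print(peak_gflops(1, 3.0, 4))    # Pentium 4 @ 3 GHz: ~12 GFLOPS (SSE)
print(peak_gflops(9, 3.2, 8))    # Cell: ~230 GFLOPS (PPE + 8 SPEs, 4-wide fused multiply-add)
print(peak_gflops(2, 3.0, 4))    # Core 2 Duo @ 3 GHz: 24 GFLOPS
[/code]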
The GPUs found in these systems were not quite so impressive compared to those available in desktop systems at launch, but they were still high-end. The Xbox 360's Xenos was built by ATI, falling somewhere between the capabilities of its R520 (sold as the Radeon X1800 series, released in October 2005) and its R600 (retailed as the Radeon HD 2900 series, released in May 2007). The PlayStation 3's Reality Synthesizer was designed by NVIDIA as a slightly cut-down G71 (marketed as the GeForce 7900 series, released in mid-2006).
In short, the (theoretical) CPU performance of the current generation consoles was out of this world when they launched. Their GPUs went toe-to-toe with discrete cards costing as much as the consoles themselves.
[h=3]The (potentially) unexceptional eighth generation[/h] The Southern Islands GPU, shipping in the HD 7970, has been on sale for three months already. With neither next-generation console likely to hit the market until 2013 (and probably late 2013 at that), Southern Islands will be the best part of two years old when those systems finally arrive. Southern Islands is a fast and powerful GPU, but it has already lost the top performance spot, displaced by NVIDIA's brand-new GTX 680. It will fall further behind with the launch next year of AMD's Sea Islands GPU architecture. If the next-generation Xbox really does use a Radeon HD 6670-class part, it'll be even less impressive.
Estimates of CPU performance are harder to make, given the dearth of information about these consoles. Being realistic, we can't expect any great leaps for the CPU either. If AMD could produce processors that were competitive with or superior to current shipping x86 processors, it would be doing so. Unfortunately for AMD, its newest Bulldozer architecture hasn't reached the performance levels the company originally announced. The next-generation PlayStation CPU could be a Bulldozer derivative, or it might be based on the company's low-power Bobcat design. In either case, it's unlikely to boast the kind of remarkable theoretical performance that the Cell claimed relative to its contemporaries.
Seventh-generation consoles leapfrogged the top-level PC performance of the time. The systems were enormously powerful, and enormously expensive to build. Both Microsoft and Sony sold them at a considerable loss for their first few years on the market. Thanks to these subsidies, they offered phenomenal value for the gamers' dollar, affording gaming experiences that would be prohibitively expensive for PC gamers to mimic at launch. If the current architecture rumors prove to be true, eighth-generation consoles aren't going to pull off the same feat. They'll be a substantial step up from current console hardware, sure. But they likely won't be able to offer the same wow-factor the seventh generation did.
If Sony and Microsoft have indeed slowed down their console hardware arms race, building for more modest specifications instead, then this could be good news for everybody—except perhaps console gamers.
[h=3]The cutting edge has lost its point[/h] Cutting-edge hardware is expensive to produce. While Microsoft could probably stomach another round of massively subsidized gaming hardware, Sony probably can't. Subsidized hardware is a risky proposition. More modest systems, selling perhaps at break-even at launch, are much more palatable to shareholders and beancounters alike. Nintendo and Apple have both demonstrated that hardware can be sold profitably from day one. This is certainly the more sustainable model for the long-term health of the industry.
Cutting-edge hardware is also, arguably, pointless for a new console. While PC gamers can always slap on a huge 2560×1600 or 2560×1440 monitor—something that taxes even dual high-end video cards these days—consoles are for the most part limited to the 1920×1080 at 60 Hz that HDTV sets allow for. 3D sets, which ideally need 120 frames per second as input, do raise the bar somewhat, but speccing the GPU for this niche audience would be a foolhardy endeavor. It would make the GPU more expensive for 100 percent of customers, with benefits seen only by a handful.
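The pixel budgets make the point concrete. A quick comparison of how many pixels per second each display mode demands:

[code]
# Pixels per second demanded by the display modes discussed above.
modes = {
    "PC monitor, 2560x1600 @ 60 Hz": 2560 * 1600 * 60,
    "PC monitor, 2560x1440 @ 60 Hz": 2560 * 1440 * 60,
    "HDTV, 1920x1080 @ 60 Hz":       1920 * 1080 * 60,
    "3D HDTV, 1920x1080 @ 120 Hz":   1920 * 1080 * 120,
}
for name, pixels_per_second in modes.items():
    print(f"{name}: {pixels_per_second / 1e6:.0f} megapixels/s")
# 2560x1600 demands ~246 Mpx/s, roughly double the ~124 Mpx/s of 1080p60.
[/code]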
Contemporary CPUs are already overkill for many games. Developers have struggled to exploit the large numbers of hardware threads that processor designs now support. Even a good-looking and moderately physics-rich game such as Battlefield 3 rarely demands more than three cores of a current Intel Sandy Bridge processor. There are games that can take more advantage of multiple cores, but they're the exception, not the rule. As long as the CPU is at least adequate, the GPU is probably the best place to invest money.
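Amdahl's law captures why piling on cores helps so little here: if only a fraction of a frame's CPU work can run in parallel, the serial remainder caps the speedup no matter how many threads the hardware offers. A small sketch, where the 60 percent parallel fraction is an assumed figure for illustration:

[code]
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assume 60% of per-frame CPU work parallelizes (illustrative, not measured):
for cores in (1, 3, 6, 12):
    print(f"{cores} cores -> {amdahl_speedup(0.6, cores):.2f}x")
# 3 cores give 1.67x; quadrupling to 12 cores only reaches 2.22x.
[/code]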

With a 1080p60 graphical upper limit and recognition of the complexities of multithreaded programming, there isn't a compelling case for building hardware that's streets ahead of what we have today.
[h=3]Media machines[/h] Keeping the hardware inexpensive is also important for another reason. Consoles aren't just used for games. Xbox LIVE Gold subscribers spend more hours per month watching streaming TV than they do playing games. This is a burgeoning market that greatly expands the appeal of games consoles—console gaming is still a niche activity; watching TV isn't.
Streaming media has mainstream appeal in a way that games won't achieve for another decade or two. It's an audience worth going after, but it changes the economics of console hardware development substantially. The game consoles can be subsidized and sold at a loss because each game also includes a cut for Sony or Microsoft. As long as gamers buy a handful of games, the money can be recouped. Boxes used predominantly for streaming media don't provide access to that same revenue stream.
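A rough sketch of that razor-and-blades math, with every number an assumption for illustration:

[code]
# Illustrative console-subsidy arithmetic; both inputs are assumptions.
hardware_loss_per_unit = 100.0  # assumed loss on each console sold at launch
platform_cut_per_game = 7.0     # assumed licensing cut per game sold

breakeven_attach_rate = hardware_loss_per_unit / platform_cut_per_game
print(f"Games per console needed to recoup the subsidy: {breakeven_attach_rate:.1f}")
# ~14 games over the console's life covers a $100 loss;
# a box bought only for streaming never reaches that breakeven.
[/code]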
Microsoft does still make money from some streaming media users, since many services are locked behind its Xbox LIVE Gold paywall. But with competition from other set-top boxes offering comparable streaming capabilities with no monthly cost, it's not clear if this is sustainable. To accommodate this audience, the hardware can't be sold at a loss—which means it can't include expensive, high-end components.
[h=3]Good news for developers[/h] A more conventional system architecture would be good news for developers. The Cell architecture in the current PlayStation 3 has proven difficult for developers to make the most of. Its design—a single PowerPC core with eight simple but fast vector cores (of which six are usable by third-party developers)—is quirky. The Xbox 360, with its three identical cores and six hardware threads, and the PC are both easier to use and understand.
This is not to say that the next-generation PlayStation will necessarily be identical in design to a PC (though that has been tried before, with the original Xbox). Sony and AMD might have a few custom tricks up their respective sleeves. AMD's plan is to produce highly integrated systems-on-chips, and the company has said that it's keen to include additional processing units in these designs. It's easy to envisage a custom design that combines perhaps two or four Bulldozer or Bobcat CPU cores and a Southern Islands GPU—both "standard" AMD parts—with, for example, a high-speed memory unit, or a dedicated vector processing unit similar to those found in the Cell processor.
A conventional design means developers can take full advantage of the hardware much earlier in its lifecycle. As a rule of thumb, games released later in a console's life look better than those released earlier. Early in the console's life, developers don't yet know the best way to wring out every last bit of performance from the system. The more unusual and complex the architecture, the longer it takes to understand how best to use it.
While the hardware companies might not like it, developers like systems that aren't strange outliers. Most major games from major publishers are not exclusive to any one platform. Huge franchises like Call of Duty are cross-platform titles, released for Xbox 360, PlayStation 3, and PC. As a result, these games tend to be developed based on the lowest common denominator. A PlayStation 3 might be particularly good at a particular task (a fancy graphical effect, say), but if the Xbox 360 and PC aren't equally adept at that same task, cross-platform developers will have no option but to ignore the PlayStation 3's aptitude, or spend a lot of development time tuning a version specifically for the system.
This might mean platform exclusives don't have any special capabilities to take advantage of, but with platform exclusives normally negotiated according to studio ownership or cash payments—rather than the nature of the hardware in question—the impact of this is likely minimal.
[h=3]An AMD victory[/h] If AMD has scored the GPU design win for the next Xbox, and both the CPU and GPU designs for the next PlayStation, this is enormously good news for the company. It will provide a steady stream of income for many years to come.
It might also help the company undermine NVIDIA's attempts to court game developers. NVIDIA's "The Way It's Meant To Be Played" (TWIMTBP) promotional program sees NVIDIA work with developers to help market or develop their games. In theory, TWIMTBP games are developed on, and optimized for, NVIDIA hardware. In practice, the extent varies: some games are developed on NVIDIA hardware with NVIDIA offering performance-tuning advice; for others, the branding is applied only after development has been completed, purely so that publishers can take advantage of NVIDIA's marketing and promotional dollars.
At a minimum, TWIMTBP games should run reliably on the company's hardware; they may or may not contain additional tuning to ensure optimal performance on it.
With both next-generation consoles using AMD graphics, it's inevitable that games for these consoles will be developed on, and developed for, AMD GPUs as their first priority. NVIDIA will still have a role to play, as its GPUs will continue to be found in PCs. But with consoles taking the lion's share of the market for most games, optimization for NVIDIA is unlikely to ever rival that for AMD.
[h=3]What about the gamers?[/h] While bad news for NVIDIA, it's probably worse news still for another demographic: current PlayStation 3 owners. The radical shift in architecture, from Cell with NVIDIA graphics to x86 with AMD graphics, means that the next generation PlayStation is unlikely to offer backwards compatibility with existing titles (rumors are already pointing towards Sony removing this feature, in fact). Emulating Cell on the CPU will be impossible, as the CPU simply won't be fast enough.
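An order-of-magnitude check makes the case, assuming (purely for illustration) that faithful emulation costs around a tenfold slowdown:

[code]
# Why software emulation of Cell is implausible on a next-generation CPU.
cell_peak_gflops = 230.0   # Cell's theoretical peak, cited earlier
emulation_overhead = 10.0  # assumed slowdown for faithful emulation (illustrative)

required_host_gflops = cell_peak_gflops * emulation_overhead
print(f"Host CPU would need roughly {required_host_gflops:.0f} GFLOPS of equivalent throughput")
# ~2,300 GFLOPS -- far beyond any current or near-term x86 CPU.
[/code]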
Sony could potentially integrate a Cell processor into the new system. The company did a similar thing with the PlayStation 3; initial models included the PlayStation 2's Emotion Engine for backward compatibility. Then Sony dropped the chip as a cost-saving measure in 2007. Adding hardware purely for backwards compatibility is hard to justify on a cost basis: the older games have limited appeal to new buyers, and even existing PS3 owners could continue to use their old hardware. There's an outside chance the GPU could be roped in to allow Cell emulation, or that a vector co-processor could be integrated into the CPU. But in all likelihood, the next PlayStation will break from Sony's backwards compatibility trend.
Console gamers of all kinds may also be disappointed that the new machines won't be as tremendous a leap over current systems as past generations have been. Consoles have already been eclipsed by PCs—with the result that games like Battlefield 3 offer PC players larger maps with more players than the consoles can cope with—and it looks like that will still be the case come the eighth generation.
If current rumors are to be believed, the next generation of Sony and Microsoft consoles will gain performance parity with PCs, but not much more. Consoles will still have their advantages—the range of peripherals, the plug-and-play simplicity, the reduced maintenance, the low up-front cost—but they won't be able to offer best-in-class gaming, even at their debut. For that, only a PC will do.
 

IEI
#69
[h=2]Iran moving ahead with plans for national intranet[/h]
By Ryan Paul
Iran topped a recent list of repressive regimes that most aggressively restrict Internet freedom. The list, published by Reporters Without Borders, is a part of the 2012 edition of the organization’s Enemies of the Internet report. One of the details addressed in that report is the Iranian government’s bizarre plan to create its own “clean” Internet. The proposed system, an insular nationwide intranet that is reportedly isolated from the regular Internet, would be heavily regulated by the government.
Reporters Without Borders drew attention to Iran’s national Internet plan when it was first proposed in 2011. The organization says that the system "consists of an Intranet designed ultimately to replace the international Internet and to discriminate between ordinary citizens and the 'elite' (banks, ministries and big companies), which will continue to have access to the international Internet."
In addition to developing its own Intranet system, Reporters Without Borders says that the Iranian government is also creating its own custom electronic mail service and a national search engine called Ya Haq (Oh Just One) that is intended to replace Google. In order to obtain an account on the state-approved mail service, users will have to register their identity with the government.
According to an article published by Fast Company in February, Iran's national Internet system represents one of the "most ambitious efforts yet by any government to censor the Internet." The content available over the national network will be tightly controlled. It will block access to foreign websites and to services used for communicating with the outside world.
The Iranian government has steadily increased the intensity of its Internet censorship efforts. It's a response to the increasingly important role that the Internet has played in enabling political dissent and unencumbered communication. The supreme leader of Iran, Ayatollah Ali Khamenei, recently established a Supreme Council of Cyberspace to regulate Iran’s new Internet.
Cleric Hamid Shahriari, who is a member of the council, said the group was “worried about a portion of cyberspace that is used for exchanging information and conducting espionage,” according to an article published last month by the Wall Street Journal.
The complaint about espionage is likely a reference to Stuxnet, an unusual computer virus that was designed to sabotage industrial machinery. The virus wreaked havoc on Iran’s controversial nuclear program, reportedly setting it back by as much as two years. The attack proved to be a major embarrassment for the country. Ars Technica was among the news sites banned in Iran after reporting on the incident.
In its 2012 report, Reporters Without Borders characterized Iran's national Internet scheme as "frequently announced and always postponed." The group says that Iran's existing Internet filtering system, which the government uses to selectively block access to websites and social networks, already represents an extreme form of Internet censorship. The national Internet plan, says Reporters Without Borders, is likely just a political gesture at this point.
Recent reports that claimed the Iranian government will deploy the network within five months are said to be inaccurate. According to AFP, reports of an August launch date were likely based on a hoax and have since been denied by government officials. The actual timeline for the launch is still unknown.
Iranians currently rely on proxies and other related tools to circumvent the country’s existing censorship system. It’s unclear if such methods will continue to work if the country eventually manages to put its national Internet plan into practice.

Update: the article was updated to indicate that the launch of the national network is not planned for August as was previously reported.

source
http://arstechnica.com/tech-policy/...internet-launch-its-own-clean-alternative.ars
 

IEI
#70
[h=2]Welcome, Discovery![/h] "Think we’ll be able to see it from here?"

Working three miles from Washington Dulles International Airport meant that my coworkers and I had a pretty good chance of seeing Discovery, especially from the top floors of our building, and potential viewing locations were a frequent topic of discussion in the days leading up to its arrival.
Not wanting to take a chance, I decided to head to the Udvar-Hazy Center, the Smithsonian's Air and Space Museum annex near the airport, to watch Discovery arrive. The parking lot opened at 8 AM, and by the time I arrived an hour later, almost all two thousand spaces were filled. The crowds had spread out in search of viewing spots, and I decided to head for the berm that flanked the museum.

[Photos: Discovery's arrival at the Udvar-Hazy Center]
 

IEI
#75
This is interesting:
Half of PC users worldwide use pirated software one way or another

[h=1]Half Of PC Users Are Pirates, Says Study[/h]
Over half of PC users worldwide have admitted to using pirated software in the last year, according to a study by the trade group Business Software Alliance (BSA). BSA's ninth annual Global Software Piracy Study shows a sharp increase in software piracy, especially among emerging economies. In the UK, more than one in four programs users installed in 2011 were unlicensed.
[h=2]Flying the Jolly Roger[/h]
In a survey of around 15,000 computer users across 33 countries, 57 percent admitted to using pirated software, up from 42 percent the year before. The BSA estimates that the global annual cost of software piracy has reached $63.4 billion (£40bn).

The UK is firmly below the global average, with just 27 percent of computer users admitting they acquired software illegally last year. This translates into an approximate £1.2 billion loss for the software industry.
According to the study, young men are much more likely to use unlicensed software than any other demographic. 28 percent of professed software pirates in the UK are under 34 years old, and 79 percent are male.
“As the UK enters a double-dip recession, it has never been more important to protect the creative industry’s intellectual property and its vital contribution to the economy. However, to do so we need to fundamentally change the way we view and acquire software,” says Julian Swan, director of compliance marketing at BSA EMEA.
The study discovered that more than three quarters (77 percent) of UK PC users surveyed do not think the risk of getting caught is an effective deterrent to software piracy.
Under UK law, the maximum damages software developers can claim is equivalent to the cost of the software license. The BSA is calling for a stronger damages law, including double damages, to stem the increase in illegal software use.
The study also found that computer users in emerging markets are more likely to use pirated software than those in mature ones – 68 percent versus 24 percent respectively.
By its sheer scale, China has the most troubling piracy problem. Its illegal software market was worth nearly £5.5 billion in 2011 versus a legal market of less than £1.7 billion.
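Put another way, the scale of China's problem falls straight out of those two figures:

[code]
# China's software piracy rate by market value, from the figures above (GBP billions).
illegal_market = 5.5
legal_market = 1.7

pirated_share = illegal_market / (illegal_market + legal_market)
print(f"~{pirated_share:.0%} of China's PC software market by value is unlicensed")
# -> roughly 76 percent
[/code]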
[h=2]Walking the plank[/h]
According to the BSA, on average only 20 percent of software pirates consider current enforcement measures a sufficient deterrent to their activities.

“It is clear that the fight against software piracy is far from over. Although emerging markets are of the greatest concern, the problem is still persisting in mature markets, in which one in four admit to using pirated software. One of the more troubling issues is that business decision makers purchase some legitimate copies but then turn a blind eye to further (illegal) installations for new users, locations and devices,” said Robin Fry, commercial services partner at DAC Beachcroft.
“Although the legal framework currently in place in the UK generally serves the software industry well, readily accessible enforcement could be improved. As an organisation we endeavour to assist our members in protecting their products and take to task those who illicitly seek to exploit them. However, the existing legislative process can be unduly unwieldy – so much so that many businesses, and enforcement agencies, are put off,” commented Julian Heathcote Hobbins, general counsel at the Federation Against Software Theft.
“It is all very well having the IP rights in place, but unless we can improve the practical enforcement measures, the effectiveness of the laws will be blunted,” he added.
We should note that the previous BSA reports have been criticised by some members of the industry as “propaganda”.

BSA has recently exercised its power by working out a settlement worth £10,000 with the Blackpool-based company George Morrison over its illegal use of Microsoft and Autodesk products.