|
Post by Admin on Apr 18, 2021 18:21:29 GMT
DOS GAMING PC GETS NECESSARY UPDATES
hackaday.com/2021/04/17/dos-gaming-pc-gets-necessary-updates/

PC/104 is a standard computer form factor that most people outside of industrial settings probably haven't seen before. It's essentially an Intel 486 computer with lots of support for standards that have long since disappeared from most machines, but this makes it great for two things: controlling old industrial equipment and running classic DOS games on native hardware. For the latter, we turn once again to [The Rasteri], who is improving on his previous build with an even smaller DOS gaming rig, this time based on a platform even more diminutive than PC/104.

The key to a build like this is native support for the long-obsolete ISA bus, needed to interface with a Sound Blaster card, the gold standard for video game audio of the era. This smaller computer keeps that functionality in a smaller package, but with some major improvements. First, it has a floating-point unit, so it can run games like Quake. It's also much faster than the PC/104 system and uses less power. Finally, it fits in an even smaller case.

The build goes well beyond simply running software on a SoM (system-on-module) computer. [The Rasteri] also custom-built an interface board for this project, complete with all of the necessary ports and an ISA sound chip, all while keeping size to a minimum. The new build also gets a better name than the old one (although he phrases this upgrade slightly differently), and will let him expand some features in the future. Be sure to check out that first build if you're new to this saga.
|
|
|
Post by Admin on May 1, 2021 19:28:55 GMT
Hackboard 2 Is a $140 Windows 10 Pro Single-Board Computer
Powered by a dual-core Intel Celeron processor and 4GB of RAM; there's also an Ubuntu version for $99.
uk.pcmag.com/old-desktop-pcs/130488/hackboard-2-is-a-140-windows-10-pro-single-board-computer

The Raspberry Pi continues to be the most popular choice for single-board computers, especially considering the very low price point. But what if you want a single-board computer capable of running Windows 10? The Hackboard 2 offers just that for a surprisingly low price.

Hackboard 2 was created by a team spread across Austin, London, and Shenzhen. As the Hackboard website explains, the idea was formed very early in the coronavirus pandemic, when Quantum Engineering CEO Mike Callow came up with the idea of "creating a small, low-cost, Windows-powered and Intel-based computer for children, parents, and educators who wouldn't normally be able to afford one."
|
|
|
Post by Admin on May 2, 2021 17:52:15 GMT
VGA GRAPHICS CARD IN 74XX LOGIC
hackaday.com/2021/04/30/vga-graphics-card-in-74xx-logic/

Feeling nostalgic, we presume, [Glen Kleinschmidt] set out to build a 640x480x64 VGA controller card from discrete logic chips. If we ignore the 512Kx8 Cypress SRAM video memory, he succeeds, too — and on a very readable, single-page A3 schematic. The goal is to interface some of his older 8-bit machines, like the TRS-80 Model 1 and the BBC Micro, but for now he's running a demo using a 20-plus-year-old PIC16F877 micro.
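A card like this lives or dies by its counter chain hitting the standard VESA 640x480@60 Hz timing. As a reference, here is that timing worked through in a short C sketch; these are the standard published numbers, not values taken from [Glen]'s schematic:

```c
/* Standard VESA 640x480@60Hz timing -- the counts a discrete
   74xx counter chain has to hit. Reference values only, not
   taken from [Glen]'s design. */
#include <stdio.h>

int main(void) {
    /* Horizontal, in pixels at a 25.175 MHz pixel clock */
    int h_visible = 640, h_front = 16, h_sync = 96, h_back = 48;
    /* Vertical, in lines */
    int v_visible = 480, v_front = 10, v_sync = 2, v_back = 33;

    int h_total = h_visible + h_front + h_sync + h_back; /* 800 */
    int v_total = v_visible + v_front + v_sync + v_back; /* 525 */

    double refresh = 25.175e6 / (h_total * v_total);     /* ~59.94 Hz */
    printf("%d x %d total, %.2f Hz\n", h_total, v_total, refresh);
    return 0;
}
```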
|
|
|
Post by Admin on May 4, 2021 19:17:08 GMT
|
|
|
Post by Admin on May 10, 2021 20:29:22 GMT
What Is COBOL and Why Is It in Demand?
BY JENNIFER SEATON, PUBLISHED OCT 15, 2020
Job listings in parts of the US have started demanding COBOL knowledge. But what is this programming language?
www.makeuseof.com/what-is-cobol/

The 60-year-old programming language COBOL is experiencing a resurgence. Many government system mainframes run COBOL and have been struggling to deal with a surge in demand. In particular, the state of New Jersey's unemployment system is administered by a 40-year-old COBOL mainframe. With the surge in unemployment connected to COVID-19, the system is struggling to keep up. Governor Murphy has identified COBOL programmers as an under-appreciated necessity.

What Is COBOL?

In 1959, the Committee on Data Systems Languages designed COBOL. They wanted to design a standard programming language to run on many different mainframes. At that time, many new programming languages were being developed, and translating programming languages to run on new hardware was becoming too expensive. COBOL, or common business-oriented language, was the solution to this problem.

COBOL was based on the programming language FLOW-MATIC, which was created by Grace Hopper. It was the first programming language to use English terms for data processing instead of mathematical notation. Grace Hopper explained: "I used to be a mathematics professor. At that time I found there were a certain number of students who could not learn mathematics. I then was charged with the job of making it easy for businessmen to use our computers." (The Early Development of Programming Languages, pg. 29)

Similarly, COBOL uses English terms and was designed to be easy to read. However, some have criticized it for being too wordy. For example, in C you might write the following to add two numbers:

int result = 1 + number;

The same code in COBOL would be written as:

ADD 1 TO number GIVING result

Much like C, COBOL is a procedural programming language. This simply means that COBOL programs are designed to follow sequential steps. COBOL is also a self-documenting language, which adds to its usability. However, the most well-known feature of COBOL is that it can handle massive amounts of data processing.
|
|
|
Post by Admin on May 11, 2021 14:23:49 GMT
TRENDnet 5-port Unmanaged 2.5G Switch review - Introduction
www.guru3d.com/articles-pages/trendnet-5-port-unmanaged-2-5g-switch-review,1.html

We review the TEG-S350 switch from TRENDnet. We've been evangelizing faster Ethernet for years now. That trend has started to reach PC motherboards, NAS units, and routers, but lagging behind are affordable multi-gig switches, and we're not sure why. Here in my office we already made the move to 10 GigE; it's very expensive to do so, though, and the limiting factor is the (expensive) switches. Older 10 GigE models are all optical/fiber-connected. That's not going to work for a home situation, so the transition toward RJ45 was mandatory; slowly but steadily it is now making its way to the market.

Today we test the TRENDnet TEG-S350, a 5-port 2.5G switch. While still expensive and not matching 10G speeds, this might be a nice alternative. Available in 5- and 8-port versions, all ports offer full-duplex 2.5G RJ45 jacks, which means your traditional CAT5e and better cabling will support these speeds just fine. The current 1 Gigabit standard has a maximum throughput of roughly 128 MB/sec, minus fault tolerance and things like QoS. This year it seems motherboards and NAS units are all being fitted with 2.5G connectors, and that means this could be a sweet-spot switch, as all connectors offer 2.5G connectivity, bringing your throughput towards 312 MB/sec minus fault tolerances and the rest.
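For reference, the throughput ceilings quoted above are just the line rate divided by eight bits per byte; a quick sketch of that arithmetic (real-world numbers land lower once framing and protocol overhead come off):

```c
/* Raw throughput ceiling: line rate / 8 bits per byte.
   (The review's 128 MB/s figure for 1 GigE counts a gigabit
   as 1024 megabits; in decimal units it works out as below.) */
#include <stdio.h>

static double max_mbytes_per_sec(double gbit_per_sec) {
    return gbit_per_sec * 1000.0 / 8.0;
}

int main(void) {
    printf("1.0 GigE: %6.1f MB/s\n", max_mbytes_per_sec(1.0));  /* 125.0  */
    printf("2.5 GigE: %6.1f MB/s\n", max_mbytes_per_sec(2.5));  /* 312.5  */
    printf("10  GigE: %6.1f MB/s\n", max_mbytes_per_sec(10.0)); /* 1250.0 */
    return 0;
}
```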
|
|
|
Post by Admin on May 12, 2021 0:14:40 GMT
TOPIC: Understanding edge computing
Cloud computing has led many organizations to centralize their services within large datacenters. However, new end-user experiences like the Internet of Things (IoT) require service provisioning closer to the outer "edges" of a network, where the physical devices exist.
www.redhat.com/en/topics/edge-computing

What is edge computing?

Edge computing is computing that takes place at or near the physical location of either the user or the source of the data, which results in lower latency and saves bandwidth. In a cloud computing model, compute resources and services are often centralized at large datacenters, which are accessed by end users at the edge of a network. This model has proven cost advantages and more efficient resource sharing capabilities. However, new forms of end-user experiences like IoT need compute power closer to where a physical device or data source actually exists, i.e. at the network's "edge." By placing computing services closer to these locations, users benefit from faster, more reliable services with better user experiences, while companies benefit by being better able to process data, support latency-sensitive applications, and use technologies like AI/ML analysis to identify trends and offer better products and services.
|
|
|
Post by Admin on May 12, 2021 14:57:44 GMT
New Nebulae Backdoor Linked with the NAIKON Group
April 28, 2021 · 2 Min Read
labs.bitdefender.com/2021/04/new-nebulae-backdoor-linked-with-the-naikon-group/

DLL hijacking is a malware execution technique that hardly needs any introduction. But while spotting DLL hijacking vulnerabilities would get most security researchers a bounty or a mention in a hall of fame, our investigation of sideloading techniques in several vulnerable applications led to the discovery of a long-running operation of a notorious APT group known as NAIKON. Unlike previous NAIKON operations, the one documented in the whitepaper below features a secondary backdoor that has an important role in persistence. We called it Nebulae.

Who is NAIKON?

NAIKON is a threat actor that has been active for more than a decade. Likely tied to China, the group focuses on high-profile targets such as government agencies and military organizations in the South Asia region.

Targets

During our investigation, we identified that the victims of this operation are military organizations located in Southeast Asia. The malicious activity was conducted between June 2019 and March 2021. At the beginning of the operation, the threat actors used the Aria-Body loader and Nebulae as the first stage of the attack. From our observations, starting with September 2020, the threat actors included the RainyDay backdoor in their toolkit. The purpose of this operation was cyber-espionage and data theft.

Mitigation

Bitdefender enables organizations to contend with APT-style attacks with GravityZone endpoint detection and response (EDR) and managed detection and response (MDR) services that apply the MITRE ATT&CK framework for identifying and remediating security incidents throughout the entire attack kill chain.
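The write-up doesn't show code, but the defensive side of sideloading is easy to sketch: a Windows binary can opt out of the legacy DLL search order so a DLL planted next to the executable (or in the working directory) can't be loaded in place of the real one. A minimal illustration using the standard Win32 calls; version.dll here is just a commonly sideloaded example, not a reference to this campaign's loader:

```c
/* Minimal sketch of hardening a Windows app against DLL
   search-order hijacking (the sideloading technique the
   article describes). Requires Windows 8+ / KB2533623. */
#define _WIN32_WINNT 0x0602
#include <windows.h>

int main(void) {
    /* Restrict implicit DLL resolution to the application dir,
       System32, and explicitly added paths. The current working
       directory is no longer searched. */
    SetDefaultDllDirectories(LOAD_LIBRARY_SEARCH_DEFAULT_DIRS);

    /* When loading a system DLL by hand, pin it to System32 so a
       copy planted next to the EXE can't be picked up instead. */
    HMODULE h = LoadLibraryExW(L"version.dll", NULL,
                               LOAD_LIBRARY_SEARCH_SYSTEM32);
    if (h) {
        /* ... use the library ... */
        FreeLibrary(h);
    }
    return 0;
}
```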
|
|
|
Post by Admin on May 12, 2021 15:14:57 GMT
SMALLEST DISCRETE TRANSISTOR 555 TIMER
hackaday.com/2021/05/11/smallest-discrete-transistor-555-timer/

Over at Tiny Transistor Labs, [Robo] took it upon himself to reproduce the classic 555 timer in discrete transistor form. For bonus points, he also managed to put it in a package that's the same basic size, pin-compatible with, and a plug-in replacement for the original.

The first task was deciding which 555 circuit to implement. He examined a handful of different implementations — and by examined, we mean dissected them and studied the die circuitry under a microscope. In the end, he went with Hans Camenzind's original circuit, both as a tribute and because it used the fewest transistors — a point which helped manage the final size, which is only a little bit bigger than the IC!

Speaking of sizes, have you ever soldered an EIA 01005 resistor? We agree with [mbedded.ninja], who wrote in a post about standard chip resistor sizes that the 01005 is a "ridiculously small chip package that can barely be seen by the naked eye." It is 16 thou x 8 thou (0.4 mm x 0.2 mm) in size, and despite its name and placement in the imperial series, it is not half the size of an 0201. The transistors are your standard 2N3904 / 2N3906, but purchased in a not-so-standard DFN (Dual Flat No-leads) package. We might think of a 1.0 x 0.6 mm component as small, but compared to its neighboring resistors in this circuit, it's huge.

[Robo] has done this kind of project before, most recently making a discrete recreation of the classic 741 op-amp. We covered a similar, but larger, discrete 555 timer project back in 2011. If you want to go really big-scale with your own reproduction project, check out the MOnSter 6502 from five years ago for further inspiration. Thanks to [Lucas] for the tip.
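As a refresher on what all those transistors implement, the standard 555 astable equations are worth a quick worked example; the R/C values below are arbitrary, not from [Robo]'s build:

```c
/* The classic 555 astable relationships from the standard
   datasheet math. Component values are arbitrary examples. */
#include <stdio.h>

int main(void) {
    double r1 = 10e3;   /* ohms */
    double r2 = 47e3;   /* ohms */
    double c  = 100e-9; /* farads */

    double freq = 1.44 / ((r1 + 2.0 * r2) * c); /* ~138 Hz */
    double duty = (r1 + r2) / (r1 + 2.0 * r2);  /* fraction of time high */

    printf("f = %.1f Hz, duty = %.1f%%\n", freq, duty * 100.0);
    return 0;
}
```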
|
|
|
Post by Admin on May 12, 2021 18:14:23 GMT
IBM's New Chip Technology Shows Off the Next Big Step in Moore's Law
By Jason Dorrier, May 09, 2021
singularityhub.com/2021/05/09/ibms-next-generation-chip-tech-shows-off-next-step-in-moores-law/

Increasingly, modern life depends on how skillfully we shuttle electrons through the nanoscale mazes etched on computer chips. These processors aren't just for laptops anymore—they're used in your car, your thermostat, your refrigerator and microwave. And the pandemic has revealed just how deeply our dependence runs. A global shortage of computer chips, brought on by vacillating demand and supply chain issues, is currently rippling through device-makers, of course, but also makers of cars, vacuum cleaners, and stove vents. Clearly, we're hooked.

So, perhaps it's no surprise that when companies announce better, faster, more efficient computer chips, the world takes notice. This week, it was IBM's turn to make headlines. The company, once synonymous with all things computing, announced that it's demonstrated a 2-nanometer (nm) chipmaking process for the first time. In a press release, IBM said the new process would yield some 50 billion transistors on a chip the size of a fingernail. It would also bring chips that are 75 percent more efficient or 45 percent faster than today's 7-nm chips.

On its face, it would seem IBM just leapt far ahead in the race for top chip tech. Intel's latest chips use a 10-nm process and TSMC's use a 7-nm process. And the company has made some very cool and notable progress here. But comparing chips is complicated. So, it's worth dissecting the news a bit more to better understand the bigger picture.

Nanometer to Nanometer Is Apples to Oranges

Progress in computer chips has long been measured in nanometer-sized steps. Each step down yields ever more components—most notably, transistors—packed into the same area. And there was a time, in decades past, when the nanometer nomenclature actually did match the size of certain chip elements. But that time has passed. As chip technology advanced, the measurements of chip components decoupled from each generation's naming convention. By the time chips made the last big leap to FinFET—a 3D transistor design shaped like a fin—a little over a decade ago, the industry's node number was virtually meaningless. It didn't relate to any dimension on the chip.

There's currently a debate over what new number, or combination of numbers, better reflects progress. And although this too is proving rather complicated, one spec experts propose is transistor density per square millimeter. To see how the old naming convention is confusing, compare Intel's 10-nm chips with TSMC's 7-nm chips. The two actually have roughly equivalent transistor densities, with Intel's 100 million transistors per square millimeter actually edging out TSMC's 91 million per square millimeter. (Go here for a handy table comparing process size and transistor density of chips.)

IBM didn't announce transistor density explicitly. But after reaching out to clarify exactly what sized "fingernail" they were referencing—representatives of the company said about 150 square millimeters—the publication AnandTech calculated IBM's new process would yield some 333 million transistors per square millimeter. Which is, indeed, beyond anything in production. That said, a 3-nm chip TSMC is making for Apple could boast almost 300 million transistors per square millimeter and enter production as soon as next year.
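AnandTech's figure is straightforward division; here is the back-of-envelope check, assuming (as the article says) a "fingernail" of about 150 square millimeters:

```c
/* Reproducing AnandTech's back-of-envelope density figure.
   The ~150 mm^2 area is the number IBM quoted to them. */
#include <stdio.h>

int main(void) {
    double transistors = 50e9;  /* 50 billion, per IBM's release */
    double area_mm2    = 150.0; /* quoted "fingernail" die area */
    printf("%.0f million transistors per mm^2\n",
           transistors / area_mm2 / 1e6); /* ~333 */
    return 0;
}
```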
Nanosheets: The Next Step in Moore's Law?

Perhaps the more significant news is the design of the transistors themselves. IBM's new technology—called nanosheet or gate-all-around transistors—is the long-awaited successor to today's FinFET transistors. The company has been working on the tech since 2017. FinFET transistors consist of a fin-shaped channel surrounded on three sides by a "gate" that controls the flow of electrons. But IBM's nanosheet (or gate-all-around) transistors have a layered channel instead. The layers are stacked on top of each other and, like three pigs-in-a-blanket, are surrounded by the gate on all sides. This last bit is the most critical piece. Gate-all-around transistors provide better control of current through the channel, prevent leakage, and boost efficiency.
|
|
|
Post by Admin on May 12, 2021 18:30:03 GMT
UPDATED
A perfect storm: Why graphics cards cost so much now
It's complicated.
www.pcworld.com/article/3612693/a-perfect-storm-why-graphics-cards-cost-so-much-now.html

It's a bleak time to be a PC gamer. Nvidia's new GeForce RTX 30-series and AMD's new Radeon RX 6000-series graphics cards blaze new performance trails compared to last generation's disappointing offerings—but most people have no chance of getting their hands on either, especially not at a sane price. New graphics card stock drops disappear in minutes, if not seconds, at online retailers, often at crazily high prices. Many of those cards reappear shortly thereafter on resale sites like eBay and Craigslist for twice their suggested price, or more.

Here's a very tangible recent example. AMD's Radeon RX 6700 XT launched at $480 in mid-March. We said that in a sane GPU market, the price was about $100 too high for the performance offered. Sapphire said it would charge $580—an additional $100 premium—for its fantastic, custom-designed Nitro+ variant. When the Nitro+ 6700 XT actually hit the streets at Newegg, however, it cost a whopping $730 and still sold out in no time. The card is currently going for over $1,000 on eBay.

Most people have had more success claiming a vaccine shot than a new GPU this year, unbelievably enough. So why do graphics cards cost so much right now? It's more than just the scalpers and cryptocurrency geeks that everyone likes to blame. Let's dig into this perfect (s***)storm.
|
|
|
Post by Admin on May 13, 2021 10:02:22 GMT
How 30 Lines of Code Blew Up a 27-Ton Generator
A secret experiment in 2007 proved that hackers could devastate power grid equipment beyond repair—with a file no bigger than a gif.
www.wired.com/story/how-30-lines-of-code-blew-up-27-ton-generator/

EARLIER THIS WEEK, the US Department of Justice unsealed an indictment against a group of hackers known as Sandworm. The document charged six hackers working for Russia's GRU military intelligence agency with computer crimes related to half a decade of cyberattacks across the globe, from sabotaging the 2018 Winter Olympics in Korea to unleashing the most destructive malware in history in Ukraine. Among those acts of cyberwar was an unprecedented attack on Ukraine's power grid in 2016, one that appeared designed not merely to cause a blackout, but to inflict physical damage on electric equipment. And when one cybersecurity researcher named Mike Assante dug into the details of that attack, he recognized a grid-hacking idea invented not by Russian hackers, but by the United States government, and tested a decade earlier.

The following excerpt from the book SANDWORM: A New Era of Cyberwar and the Hunt for the Kremlin's Most Dangerous Hackers, published in paperback this week, tells the story of that early, seminal grid-hacking experiment. The demonstration was led by Assante, the late, legendary industrial control systems security pioneer. It would come to be known as the Aurora Generator Test. Today, it still serves as a powerful warning of the potential physical-world effects of cyberattacks—and an eerie premonition of Sandworm's attacks to come.
|
|
|
Post by Admin on May 14, 2021 0:03:19 GMT
AMD's 4700S Zen 2 PC Kit With Close Ties To Xbox Series X APU Gets Pictured Up Close
hothardware.com/news/amd-4700s-zen-2-pc-kit-disabled-xbox-series-x-apu-pictured

Remember AMD's mysterious 4700S Desktop Kit we wrote about a few weeks back? Speculation is that it is essentially a repurposed APU that was originally intended for Microsoft's Xbox Series X, perhaps because the silicon did not pass muster. Well, the partially disabled part has resurfaced, this time in a series of high-resolution photos. The first appearance came in a Geekbench listing, which identified the part as having 8 cores and 16 threads, 12MB of L3 cache, and a 4GHz boost clock, which is 200MHz higher than the Xbox Series X and 400MHz higher than the Xbox Series S. It also showed up on a driver support page on AMD's website, which has since been removed.
|
|
|
Post by Admin on May 26, 2021 18:59:51 GMT
Build a RISC-V CPU From Scratch
Use discrete logic chips to build a surprisingly capable CPU with the hottest new architecture
By Filip Szkandera
spectrum.ieee.org/geek-life/hands-on/build-a-riscv-cpu-from-scratch

It's a certain kind of itch that drives people to voluntarily build their own CPU. We start thinking about the papered-over gap in our understanding, the one that lurks between how logic gates and flip-flops work individually and how machine code controls a fully assembled processor. What exactly happens in the magic zone where hardwired circuits start dancing to software's ever-changing tune? It turns out this itch afflicts enough people that there are commercial kits for makers who want to put a CPU together to see (or hear) it tick, and the Web is littered with home-brewed 4-bit and 8-bit CPUs with architectures that would be familiar to an engineer from the 1970s. I should know—I made one myself.

But then I began to wonder: Could I build my own CPU featuring some of the latest technology? Could I design my own fully compliant 32-bit RISC-V central processing unit? RISC-V is an open-source architecture that's about 11 years old, and is now starting to make inroads in a world dominated by the x86 and ARM CPU architectures. I was alerted to the possibilities of RISC-V by the work of Robert Baruch, who started a similar project about two years ago but hasn't yet completed his processor, in part because he had to keep redesigning components he'd built early on to meet the needs of an evolving design.
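For a taste of what the hardwired decode stage of such a CPU has to do, here is the fixed RV32I field layout unpacked in a short C sketch; this is the standard encoding from the RISC-V spec, not code from the article's design:

```c
/* Unpacking the fixed RV32I instruction fields from a 32-bit
   word -- the same bit-slicing a hardware decoder does with
   wires instead of shifts. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t insn = 0x00b50533;            /* add a0, a0, a1 (R-type) */

    uint32_t opcode = insn & 0x7f;         /* bits  6..0  */
    uint32_t rd     = (insn >> 7)  & 0x1f; /* bits 11..7  */
    uint32_t funct3 = (insn >> 12) & 0x07; /* bits 14..12 */
    uint32_t rs1    = (insn >> 15) & 0x1f; /* bits 19..15 */
    uint32_t rs2    = (insn >> 20) & 0x1f; /* bits 24..20 */
    uint32_t funct7 = (insn >> 25) & 0x7f; /* bits 31..25 */

    printf("opcode=0x%02x rd=x%u funct3=%u rs1=x%u rs2=x%u funct7=0x%02x\n",
           opcode, rd, funct3, rs1, rs2, funct7);
    return 0;
}
```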
|
|
|
Post by Admin on Jun 2, 2021 18:11:40 GMT
|
|