Your porting questions, answered: Learn how to recompile your macOS app for Apple silicon Macs and build universal apps that launch faster, have better performance, and support the future of the platform. We'll show you how Xcode makes it simple to build a universal macOS binary and go through running, debugging, and testing your app. Learn what changes to low-level code you might need to make, find out how to handle in-process and out-of-process plug-ins, and discover some useful tips for working with universal apps. We've designed this session for experienced macOS developers who want to get their existing apps running natively on Apple silicon Macs. You can learn more about doing so in the Apple silicon documentation. For more information on the transition to Apple silicon, watch 'Explore the new system architecture of Apple silicon Macs', 'Bring your Metal app to Apple silicon Macs', and 'Optimize Metal Performance for Apple silicon Macs'. And to learn how to run your iPhone and iPad apps on Mac, check out 'iPad and iPhone apps on Apple silicon Macs'.
Hello and welcome to WWDC.
Hello and welcome to Port Your Mac App to Apple Silicon. My name is Kuba Mracek and I will be guiding you Mac app developers through the journey of transitioning your apps to Apple Silicon. We will start with an overview of what the transition means for you and your apps. Then we'll discuss building universal Mac apps and what problems you might hit there. I will show running, debugging, and testing on Apple Silicon, and we will talk about implications for plug-ins. Finally I will mention a few tips for working with universal apps. So let's jump in. The Mac is transitioning to Apple Silicon. You're probably wondering what exactly that means for you and your apps. Let's start with the basics. The native CPU architecture on Apple Silicon Macs is called arm64, which will sound familiar if you're developing for iOS. iOS devices use the same architecture.
Apps can of course be built as native ARM code and run natively, but existing Intel apps continue to work on Apple Silicon Mac computers. The Rosetta translation will seamlessly run them. arm64 is the primary topic for today, because that's the new CPU architecture that your apps will use, and not just your GUI code. All user space programs will now be using arm64 as their CPU architecture. In this session we will cover how to use Xcode 12 to compile all your code to arm64 to get it to run natively.
So I have an Apple Silicon Mac here, and I'm going to launch Xcode and open an existing project called Solar System. Let's focus on this area here in the toolbar that contains the active run destination. It says My Mac, but if I open it it reveals more options. I can choose to run under Rosetta or to build a universal app for both Apple Silicon and Intel. Let's stick with building natively and let's just build and run the app. And there it is. Without any code changes or project setting tweaks, the app builds and runs natively with just a press of the Run button in Xcode.
In Activity Monitor, I can verify the process type. For Solar System Mac, the Kind column shows Apple, meaning the app is running on the Apple CPU architecture, so it does run natively. Xcode 12 handles the necessary project settings for you; you don't have to change build settings to select CPU architectures.
And as we've just seen, for apps that don't have portability issues, building a native version is often as simple as clicking the Run button in Xcode.
So I recommend you just go ahead and try that. If your app builds and runs correctly, your job is done. In case you do run into issues, I will be describing some common pitfalls later in the session. We're also publishing extensive porting documentation at developer.apple.com/documentation: you can start with the page called Apple Silicon, and from there you can navigate to other articles that cover many of the topics from this session in greater detail. The documentation is an excellent resource that will answer many of your porting questions. Let's discuss some basic concepts around Mac apps. Your apps, and basically all executable code, are stored in a file format called Mach-O. These files can either target a single CPU architecture, say 64-bit Intel, or they can be universal, meaning they support multiple CPU architectures. To examine a file on disk, you can use the lipo command, and I will be showing you how to do that later.
Starting this year, Mac apps should be built and distributed as universal apps, built for both Apple Silicon and 64-bit Intel CPU architectures.
If you have any existing Intel only apps, or if for some reason you can't start building your app natively right away, Apple Silicon computers have Rosetta, a translation environment that can seamlessly run these. In Rosetta, the entire process is always translated. You cannot load native code into a translated process or vice versa. You also cannot use Rosetta for kernel extensions, AVX vector instructions, or virtualization. Xcode fully supports building and running apps for Rosetta. Let's look at how that works. As we've noticed before, the run destination in Xcode allows you to target Rosetta. Pressing Run will build the code for Intel and then run it in Rosetta. In Activity Monitor, the app now shows as Intel CPU. So it is running in translation. All aspects of development from Xcode, including testing, debugging, and profiling are supported, and it's all translated on the fly. So if you perhaps happen to use the debugger to look at individual CPU instructions, you'll notice that they are Intel instructions. Don't be surprised by that. They're all being transparently translated by Rosetta. Let's now focus on getting your apps ported over to run natively on Apple Silicon Mac computers. And the first step here is building your apps as universal. Let me start by explaining why I think building your apps for Apple Silicon is actually going to be a very easy task. First of all, the endianness of arm64 is the same as x86. So if you remember the PowerPC to Intel transition, you will not have to deal with any byte order swapping problems this time.
Second, if you have any shared code with iOS apps, that code is almost certainly already fully ported to arm64, because iOS uses the same CPU architecture as Apple Silicon Mac computers. And third, you can use any Mac that you already have to build apps as universal. Xcode 12, the SDK, and the entire toolchain fully support cross compilation. That is, it's able to produce code for a different architecture than the one you're using Xcode on.
I'm going to demonstrate some possible porting issues on this little utility app I've been working on called Network Benchmarking. I have a copy of this app from last year, so it's Intel only, and we can see that in the Info dialog in Finder, it shows as 'Application (Intel)'. But as you can see, I can still run it, and it will run under Rosetta. The app lets me create a TCP server and a client, and I can start a benchmark, and while it's going it's measuring the data throughput. The app is also showing a bunch of system information like the current CPU and memory usage. And that's pretty much all it does.
Now let's actually open this app in Xcode 12, and let's start bringing it over to be a native app. Let me briefly describe the overall structure of this project. It has a few targets inside, like this first one which is the actual GUI of the app. And it's mostly high level AppKit code, so I have things like an app delegate and a view controller, etc. But then I also have a few more low level components, like this TimeTools package and some networking libraries, and I even include a plug-in. Generally you should expect that the high level AppKit code shouldn't have any portability problems.
But for the more low level parts, we might need to make some fixes. So let's start building. Step one is to verify that the app still builds correctly for Intel with Xcode 12. So let me select 'My Mac (Rosetta)' as the run destination, and hit Build. In my case the app will build fine. Generally this shouldn't produce any errors, but there is one case I wanted to point out.
Native page size is different between Intel machines and Apple Silicon systems.
If you use the PAGE_SIZE macro, it's no longer a compile-time constant, so you might see a build failure because of that. The fix is very easy: either use PAGE_MAX_SIZE for a compile-time upper bound, or vm_page_size to read the value dynamically at runtime. Also note that Rosetta fully matches the Intel environment: Apple Silicon Macs support 4 kB pages for translated processes.
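As a minimal sketch of both options (the buffer and its use are illustrative, not taken from the demo project):

```c
#include <mach/mach.h>      // vm_page_size: the page size at runtime
#include <mach/vm_param.h>  // PAGE_MAX_SIZE: a compile-time upper bound
#include <stdio.h>

// A statically sized buffer has to use the compile-time upper bound,
// because PAGE_SIZE is no longer a compile-time constant on arm64.
static char scratch[PAGE_MAX_SIZE];

int main(void) {
    // For the page size of the machine you are actually running on,
    // read vm_page_size at runtime: 16 kB natively on Apple Silicon,
    // 4 kB for translated processes and on Intel Macs.
    printf("runtime page size: %lu bytes, buffer reserves %ld bytes\n",
           (unsigned long)vm_page_size, (long)PAGE_MAX_SIZE);
    return 0;
}
```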
Now onto step two: switch to building for the native architecture and resolve errors. In Xcode, I will open the run destination menu again, select 'My Mac', and perform another build. This time I do get a compile-time error. Let's look at it. It looks like there's something wrong with this function called GetDefaultTimerClass, which is using preprocessor macros, these #if lines, to switch behavior between different platforms. This is a pretty common pattern; it's called target conditionals. But it looks like I made a mistake when I wrote this code. For this branch here, I wanted to target macOS and simulators, but instead my condition says __x86_64__, that is, Intel CPUs, which used to be OK but is no longer correct. The right way to use target conditionals here is to express them in the right terms: if I want to target macOS and simulators, I should use TARGET_OS_OSX or TARGET_OS_SIMULATOR. This problem is pretty common even in otherwise portable code.
You should use the semantic target conditionals based on what you actually want to express. Here's a table with some common target conditionals.
When you want to conditionally compile code for the Mac, use TARGET_OS_OSX or #if os(macOS) in Swift code. Don't assume a CPU architecture implies the platform.
When you want to conditionally compile code for Intel, use TARGET_CPU_X86_64 or #if arch(x86_64) in Swift. And similarly for the other cases. Don't assume that a platform or running in the simulator uses a specific CPU architecture.
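To make that concrete, here is a minimal sketch of the corrected pattern; the function and its return values are illustrative, not the actual demo code:

```c
#include <TargetConditionals.h>

// Express the condition in terms of the platform you mean,
// not the CPU architecture you happen to be building for.
// (The broken version of this check tested __x86_64__ instead.)
const char *DefaultTimerName(void) {
#if TARGET_OS_OSX || TARGET_OS_SIMULATOR
    return "HighResolutionTimer";   // macOS and the Simulator
#else
    return "CoalescingTimer";       // iOS, tvOS, and watchOS devices
#endif
}
```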
Related to this is any usage of CPU specific code and assembly, either inline in your Objective-C code or standalone assembly files. All such code needs to be properly guarded with #ifs and the right target conditionals I just mentioned. If you only have Intel implementations for some functions like this, you will need to provide a second implementation for Apple Silicon. However in a lot of cases you can just rely on OS provided functionality instead. Use the Accelerate and Compression frameworks for high-performance optimized implementations of math functions and compression algorithms.
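Here is a rough sketch of what such guarding can look like, with the existing SSE path kept for Intel, a NEON path added for Apple Silicon, and a scalar fallback; the function itself is illustrative:

```c
#include <stddef.h>

#if defined(__x86_64__)
#include <immintrin.h>   // SSE intrinsics for the Intel slice
#elif defined(__arm64__)
#include <arm_neon.h>    // NEON intrinsics for the Apple Silicon slice
#endif

// Doubles every element of the buffer in place.
void double_in_place(float *values, size_t count) {
    size_t i = 0;
#if defined(__x86_64__)
    for (; i + 4 <= count; i += 4) {
        __m128 v = _mm_loadu_ps(&values[i]);
        _mm_storeu_ps(&values[i], _mm_add_ps(v, v));
    }
#elif defined(__arm64__)
    for (; i + 4 <= count; i += 4) {
        float32x4_t v = vld1q_f32(&values[i]);
        vst1q_f32(&values[i], vaddq_f32(v, v));
    }
#endif
    for (; i < count; i++) {   // scalar tail, and the generic fallback
        values[i] += values[i];
    }
}
```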
Now that the code is fixed, let's build it again.
This time I received a different build time issue. It's not a bug in any of the source code that I wrote, because the error message doesn't actually point to any source file. And if I click it, it only takes me to the build log. This time it's the linker that's emitting the error. That leads us to step three: resolving link-time issues. The problem here is that my app fails to link. Let me switch to Errors Only, so we can focus on what the linker is trying to tell me: 'Undefined symbols for architecture arm64', and I see the symbol name, but it doesn't look familiar to me. This is not a symbol or class name from my code. The important part of the error message is actually the yellow warning right above it. Let's look at it.
The linker says: 'ignoring file ..., building for macOS-arm64 but attempting to link with file built for macOS-x86_64.' What this means is that I'm depending on a binary framework that's not universal. The output also provides details: it mentions the offending framework name, it prints the symbol name, in this case an Objective-C class name, and also the referencing object file. In my app, it's the AppDelegate file that uses this class. This is useful to know, because you might want to try to remove the dependency to make progress; to do that in my app, I would go edit the AppDelegate source code. But in short, the problem here is that I have a pre-compiled binary framework in my project.

Having pre-compiled binaries in your project will require some work. All static and dynamic libraries that you depend on need to be built universal before you can build your app. You'll need to contact the vendors of those binaries and ask them to provide universal builds that contain both the Intel code and the Apple Silicon code. Then you can replace your old binaries with the universal ones, and the linker will actually be able to link your app. To make incremental progress, you can consider temporarily removing a binary dependency, but don't forget to put it back once you have a universal build of that library.

What I also recommend doing right away is to scan your project and search for anything that's not being built from source code but is pre-compiled by someone else. Typically that's files and bundles with extensions of .a, .dylib, .framework, and .xcframework. And you can use the lipo -info command to inspect any binary and see whether it's already universal or not. So let's do that on our demo project. I will open the project directory in Finder, and I will also open Terminal.
Then I will drag Sparkle.framework into Terminal to get its full file path, and use the lipo -info command on the binary inside the framework.
And once I run this command, lipo will say that it is really an Intel-only binary.
So what I need to do is update the version of this framework with a new one that's universal. Here you would typically reach out to the vendor and ask for a new version or go to the vendor's web site and download a universal binary if it's published already. But I have a new universal version of this framework already. My teammate was actually able to provide it for me. Because the framework is open source, and building it universal was literally just a matter of rebuilding with Xcode 12, we did not hit any portability issues at all. Let me replace the old version with the new one and let's inspect this new version of the framework.
Indeed, it is universal now: lipo says it's a fat file that contains both the x86_64 code and the arm64 code. And if I build my app in Xcode one more time, this time it builds successfully. Step 4? There is no step 4! I now have a native build of my app.
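For reference, the two lipo checks from this part of the demo look roughly like this; the exact path of the binary inside Sparkle.framework may differ between framework versions:

```sh
# Before: the framework binary is Intel-only
% lipo -info Sparkle.framework/Versions/A/Sparkle
Non-fat file: Sparkle.framework/Versions/A/Sparkle is architecture: x86_64

# After replacing it with the universal build
% lipo -info Sparkle.framework/Versions/A/Sparkle
Architectures in the fat file: Sparkle.framework/Versions/A/Sparkle are: x86_64 arm64
```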
In this example project, we've seen a few typical problems when building universal apps. Please see the documentation on the Apple Silicon page on developer.apple.com, which covers many of the porting problems that we've run into. If you prefer building on the command line with xcodebuild, it's important to select the right destination for building. Use the -showdestinations option to get a list of available run destinations.
And then you can use the -destination flag to select one. Here are two examples of how to target either arm64 or x86_64.
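Assuming a scheme named MyApp (the scheme name here is just a placeholder), the commands look roughly like this:

```sh
# List the run destinations available for the scheme
xcodebuild -showdestinations -scheme MyApp

# Build natively for Apple Silicon
xcodebuild build -scheme MyApp -destination 'platform=macOS,arch=arm64'

# Build for Intel
xcodebuild build -scheme MyApp -destination 'platform=macOS,arch=x86_64'
```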
So let me just summarize the steps we went through to build our app as a native app. First, we verified that the app still builds correctly for Intel CPUs. Then we switched to building natively and fixed code issues; this is the case where the code itself was not correctly portable. And third, we fixed link-time issues, which are typically caused by having binary dependencies.
And that's it. As a reminder, building universal apps works on any Mac; you can use your Intel Mac computer for that too. Now that we have our app building, let's move on and talk about running and debugging the native build of your app. I know what you're thinking: the app builds, so let's ship it, right? Well, no. Your native app still definitely needs runtime testing.
Don't assume that just because the app builds, it's going to work flawlessly.
For this part you will need an Apple Silicon Mac. Things like debugging, running unit tests, profiling, using sanitizers, all need to run native code, and that can only happen on Apple Silicon. Please plan to do full runtime testing, validation, and profiling of both CPU architectures before you ship your app. Let's do some runtime testing on our demo project. Now that the app builds I can finally run it natively. It seems to launch fine: the UI shows up, and the system information table seems to be accurate.
Let's try to start the client-server data transfer. Notice that there is some problem with the transfer statistics and the progress bar: it seems to refresh at the wrong frequency. It's supposed to update every one hundred milliseconds, but instead the progress bar just makes these weird large jumps. So let's look into this problem. Based on the benchmark values, I think the benchmark is actually running fine; it's just the UI that's not refreshing often enough. So let me set a breakpoint in my view controller, in a method called updateClientProgressUI, and let's run the benchmark again.
We will eventually hit this breakpoint, but notice that it takes a while — more than a couple of seconds. So I think that there is something wrong with the timer that's calling this method. Let's look a few stack frames below, where I have an implementation of this custom timer. And it seems to be measuring a monotonic timestamp using an API called mach_absolute_time.
So I think that's the source of the problem: whoever wrote this code assumed that mach_absolute_time is always returning values in nanoseconds. That's why the code right below it is dividing by a thousand three times to convert it into seconds. However that's not correct. I can option-click the API to peek into the documentation for it, and it says that it returns values in tick units.
So, assuming these tick units are one nanosecond is incorrect. The time base for this API is different on different CPU architectures. Probably the best solution here would be to just avoid writing custom timers using low level APIs. Foundation and GCD have implementations of timers that often work better.
But if you really need to use timestamps to measure time, you can either query the time base dynamically or switch to another API. There's a variant of the clock_gettime API that always returns the timestamp in nanoseconds.
So let's use that in our demo. The documentation pop-up conveniently mentions this other API, so I can copy it right from here.
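Sketched outside the demo project, the two correct options look roughly like this; the function names are illustrative:

```c
#include <mach/mach_time.h>
#include <stdint.h>
#include <time.h>

// Option 1: keep mach_absolute_time, but convert its ticks to
// nanoseconds using the timebase instead of assuming 1 tick == 1 ns.
uint64_t elapsed_nanoseconds(uint64_t start_ticks, uint64_t end_ticks) {
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    return (end_ticks - start_ticks) * timebase.numer / timebase.denom;
}

// Option 2: use the clock_gettime variant that always reports nanoseconds.
uint64_t now_nanoseconds(void) {
    return clock_gettime_nsec_np(CLOCK_UPTIME_RAW);
}
```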
And let's run the app one more time and let me also disable the breakpoint I've put in earlier. This time if I run the benchmark, the progress bar will update fluently. This problem is actually quite common, and the typical effect of this bug is that something is running roughly 40 times slower or an event is occurring 40 times less often than it should. If you observe something like this, search your code base for mach_absolute_time. The best way to test software is of course with automation. In this project, I have most of the code covered with unit tests, so let's run them.
Luckily all my unit tests are passing. But notice that the entire test suite only ran once. In this case, in the run destination menu, I have the native mode selected. So all my tests and the code under test were run in native mode. If I want to run the tests as Intel code under Rosetta, I have to switch the run destination and rerun my tests.
In my project they all appear to pass for both architectures. But if you see a test failing only under one of the two destinations, that again indicates some non-portable code. Similarly to building on the command line, when testing with xcodebuild, think about the destination that your tests are using.
If you want to run your tests as native code, specify arch=arm64 inside the -destination flag. This is also the default, so if you don't specify any destination, your tests will run as native code. To run your tests as Intel code under Rosetta, use arch=x86_64. You might also want to set up your CI systems to run tests in both of these modes to catch any non-portable code causing test failures.
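Using the same placeholder scheme name as before, the two test invocations look roughly like this:

```sh
# Run the tests natively (also the default on an Apple Silicon Mac)
xcodebuild test -scheme MyApp -destination 'platform=macOS,arch=arm64'

# Run the same tests as Intel code under Rosetta
xcodebuild test -scheme MyApp -destination 'platform=macOS,arch=x86_64'
```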
Let's look at one particular test that I have, because it's a performance benchmark. I'll select the native destination again and run the test, which is just measuring how long it takes to transfer 10 megabytes of data from a local server. The result is that it takes 21 milliseconds. If I switch to Rosetta and run the test again, we'll get a different result.
This time it takes 29 milliseconds, which is slower than native, but the translated code still performs very well and the result is very close to native execution.
When profiling and benchmarking on Apple Silicon Mac computers, native code generally has the best performance, and you should aim at optimizing native execution. If you see a performance degradation in native mode, watch out for any Intel-specific code optimizations that you might have, for example optimized assembly using SSE or AVX. To achieve best performance, you might need to provide a matching implementation for Apple Silicon using the ARM instruction set. But in general, you should try using Apple provided APIs whenever possible, like Accelerate.framework, because those will provide high-performance implementations for all supported CPUs.
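As a small illustration of that advice, here is a sketch that leans on Accelerate's vDSP instead of hand-written SIMD; the function is illustrative (link against the Accelerate framework):

```c
#include <Accelerate/Accelerate.h>
#include <stddef.h>

// Element-wise addition of two float buffers. vDSP provides a tuned
// implementation on every supported CPU, so there is no need to
// maintain separate SSE/AVX and NEON code paths.
void add_buffers(const float *a, const float *b, float *result, size_t count) {
    vDSP_vadd(a, 1, b, 1, result, 1, (vDSP_Length)count);
}
```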
Apple Silicon Macs use asymmetric CPU cores, and that might have an effect on your high-performance code. There are two types of CPU cores: high-performance cores, also called P cores, and energy-efficient cores, called E cores. All of them can be active at the same time for maximum performance on parallel workloads.
In Instruments, this distinction is visible in the CPUs view of the timeline.
The individual CPU cores in the list have labels telling you which ones are efficiency cores and which ones are performance cores. Don't be surprised if you see a graph where almost all the work is scheduled only on the P cores or only on the E cores: the system dynamically chooses the best way to perform work to make the right performance and battery life tradeoff.

One important pattern to avoid on asymmetric CPU systems is using spinlocks and busy-waiting code. In this example, the first function is using a spinlock, and the second example is busy-waiting, which means actively spending CPU time checking in a loop whether a new job is available in a queue. This is generally the wrong thing to do on any computer system. A much better alternative for the first function is to use a blocking type of lock, for example os_unfair_lock, and instead of checking a condition in a spinning loop, use condition variables, which block until the condition is met. On Apple Silicon Macs, busy-waiting can have the effect of pointlessly occupying the P cores, causing an overall delay in the completion of the entire work.

You should prefer synchronization primitives that block when they can't make progress. NSLock, os_unfair_lock, and pthread mutexes are all examples of blocking locks. NSCondition and pthread condition variables provide a way to wait until a specific condition occurs.
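Here is a minimal sketch of both fixes; the names and the job-queue logic are illustrative, not code from the demo:

```c
#include <os/lock.h>
#include <pthread.h>
#include <stdbool.h>

// Fix 1: protect shared state with a blocking lock instead of a spinlock.
static os_unfair_lock counter_lock = OS_UNFAIR_LOCK_INIT;
static long counter;

void increment(void) {
    os_unfair_lock_lock(&counter_lock);   // blocks instead of spinning
    counter++;
    os_unfair_lock_unlock(&counter_lock);
}

// Fix 2: wait on a condition variable instead of busy-waiting on a flag.
static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond  = PTHREAD_COND_INITIALIZER;
static bool job_available;

void wait_for_job(void) {
    pthread_mutex_lock(&queue_mutex);
    while (!job_available) {
        // Sleeps until signaled; no CPU time is spent checking in a loop.
        pthread_cond_wait(&queue_cond, &queue_mutex);
    }
    job_available = false;
    pthread_mutex_unlock(&queue_mutex);
}

void submit_job(void) {
    pthread_mutex_lock(&queue_mutex);
    job_available = true;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_mutex);
}
```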
In general, use GCD when possible, and avoid splitting your work based on the number of available CPUs; instead, prefer splitting tasks into smaller units. For details about GCD, please watch a video from WWDC 2017. And if you're using threading for audio processing, the Meet Audio Workgroups session from this year will help you fine-tune your audio apps and plug-ins.

So when you're debugging, testing, and profiling your app, remember that you can build on any Mac, but running native arm64 code needs an Apple Silicon Mac. When running your tests, try to run them under both native and translated modes to discover non-portable code in your tests. And watch out for any Intel-specific code optimizations and any code that is busy-waiting instead of blocking.

The next topic I'd like to talk about is plug-ins, because these need some special considerations on Apple Silicon computers. Plug-ins are a way to dynamically load and execute code. Plug-ins that just implement standard system extension points via NSExtension are going to just work: both native and translated plug-ins are supported. However, if your app is a plug-in host and it's using a custom plug-in loading mechanism, you will need to consider how plug-ins work on Apple Silicon Macs. Your app's process at runtime contains the code that you wrote and also code from system frameworks. If your app supports plug-ins, it will typically discover them at runtime and then load them when needed.
That's called an in-process plug-in model, and typically the app uses a call to dlopen or Bundle.load for this.
Alternatively, the plug-ins can be spawned as new processes, and we call those out-of-process plug-ins. The app and the plug-in process then use some interprocess communication mechanism like XPC. Loading another plug-in typically spawns another process. While out-of-process plug-ins generally provide better security and stability, in-process plug-ins are still very common.
If your app is using in-process plug-ins, you need to consider that on macOS on Apple Silicon, all code in one process must always have the same CPU architecture. Let's look at typical use cases for plug-ins. As I've mentioned, there are out-of-process and in-process plug-ins. What's also important is whether the plug-in is first party, meaning you build it from source code and ship it inside your app, or whether it's a third-party plug-in that some other software vendor distributes in pre-compiled binary form. If your plug-ins are first party and you can rebuild them, make sure to build them as universal plug-ins, and everything will work. For third-party plug-ins using the out-of-process plug-in model, you can make some small changes in your loading code to support both CPU architectures.
If you make sure to load the plug-in executable as the right CPU architecture, your app will be able to load both native and translated plug-ins. In-process third party plug-ins have some restrictions. Native apps can only load native plug-ins and Rosetta translated apps can only load Intel-based plug-ins.
If your plug-in vendor hasn't updated the plug-in to be universal, it might affect your users. Let's look at what happens if we have a plug-in CPU mismatch. In my network benchmarking app, I am using a plug-in to implement TLS support. And it gets loaded the first time I switch to the TLS mode, so let me do that. The plug-in loading fails, so let's inspect the console. The dynamic linker explains what happened. We tried to use dlopen to load a plug-in with this name, and the file was found on disk, but it has a wrong CPU architecture. Let's verify this. I'll copy the full path of this file and open Terminal, and I will paste the path into Terminal and use lipo -info again. And indeed this binary is Intel only.
Now if this was a third party precompiled plug-in, the right solution would be to contact that third party and urge them to publish a universal build of the plug-in as soon as possible. But let's go back into Xcode because in this case I do have the source code of this plug-in and I should be able to rebuild it as universal.
Let's go into my project, open the TLS plug-in target, and inspect its Build Settings. It looks like I have some unusual configuration here.
I am forcing Architectures to always be x86_64. Now this is probably some leftover from when I wanted to drop 32-bit support, but hard-coding x86_64 is incorrect now. We should instead be using Standard Architectures everywhere.
But an even better option is just to avoid setting Architectures altogether.
So I will just select the build setting and hit the Delete key; this way Xcode will use the correct default settings. Let's build and run the app again, and this time, if I switch to the TLS mode, the plug-in gets loaded correctly.

A typical pattern to load a plug-in is to use dlopen with a file path. It is very important to always check the return value from dlopen: if it's NULL, the full error message explaining what is wrong is provided by calling dlerror. When running this code with a mismatching plug-in CPU architecture, you will get an error explaining which file on disk has the wrong architecture. That indicates that the plug-in is not universal.
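Here is a minimal sketch of that dlopen error-checking pattern; the path handling and the entry-point symbol name are hypothetical:

```c
#include <dlfcn.h>
#include <stdio.h>

// Tries to load an in-process plug-in and returns its handle, or NULL.
void *load_plugin(const char *path) {
    void *handle = dlopen(path, RTLD_NOW);
    if (handle == NULL) {
        // On an architecture mismatch, dlerror() explains which file on
        // disk has the wrong architecture, i.e. the plug-in isn't universal.
        fprintf(stderr, "plug-in failed to load: %s\n", dlerror());
        return NULL;
    }
    void *entry = dlsym(handle, "RegisterPlugin");  // hypothetical entry point
    if (entry == NULL) {
        fprintf(stderr, "missing entry point: %s\n", dlerror());
        dlclose(handle);
        return NULL;
    }
    return handle;
}
```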
I'd like to point out that the out-of-process plug-in model doesn't have this restriction. If you're using in-process plug-ins, I encourage you to explore XPC as a solution. You can have one process per plug-in, but also one process per CPU architecture; in that case, you'll only ever use up to two extra processes. Out-of-process plug-ins provide better stability and security for your app, and for details about XPC I recommend watching a WWDC session from 2013 called Efficient Design with XPC.

If your users are stuck with an Intel-only plug-in, they can still force the app to run in Rosetta, which will allow loading those plug-ins. For universal apps, the Info dialog in Finder has a checkbox called Open using Rosetta to do that. This checkbox can be disallowed with an Info.plist key; details are explained in the Apple Silicon documentation, so please refer to it if you'd like to know more.

Let me finally share a few tips for working on apps that run on both Intel-based Macs and Apple Silicon Macs.
Once your app is building and running, and it's tested and verified to work correctly on each architecture, you're going to start distributing your app, either on the App Store or perhaps via a download link on your web site, in which case you'll need your entire software package to be notarized.
Please watch a WWDC session from 2019 called All About Notarization for details.
In Xcode, your distribution process starts with an archive build, which now produces universal apps by default. The Archive action is available in the Product menu. If your primary development workflow uses only one type of Mac computer, doing an archive build might actually be the first time your code builds universally. So if your debug iterations build fine but your archive build fails, that indicates you have some of the portability issues we covered earlier in the session.

Once your archive build succeeds, you can find your package for distribution in the Organizer window of Xcode. In Xcode 12, it now shows which CPU architectures are included in your archive builds. The Organizer also provides access to crash statistics and individual crash logs from the users of your app. For details about crash logs and how to investigate the root causes of crashes, please watch a video from WWDC 2018 called Understanding Crashes and Crash Logs. With universal apps, watch out: you'll now be receiving three different kinds of crash logs: x86_64 crash logs from Intel Macs, native arm64 crash logs from Apple Silicon Macs, and also translated x86_64 crash logs from processes running under Rosetta on Apple Silicon Macs. If you select a particular crash log in the Organizer, the Details section now shows which CPU architecture the crash originated on and whether the process was running translated or not.

If your app relies on a kernel extension, or if you provide drivers for hardware using DriverKit, there are some special considerations that you should understand when porting these types of software. In particular, more types of kernel extensions are now deprecated or disallowed on Apple Silicon Macs, and you'll have to use DriverKit for those instead. Please refer to the macOS porting documentation for details.

One more thing to watch out for is multithreading bugs.
Intel CPUs and Apple Silicon implement different memory ordering models.
For correct multithreaded code, this doesn't matter. It will always run correctly.
But bugs like race conditions and data races might behave slightly differently on each of these architectures. Specifically, a data race that worked fine on Intel CPUs and appeared to be benign could be causing crashes on Apple Silicon instead. Note that running under Rosetta fully preserves the Intel memory ordering, so you will not see a difference in behavior there.

Whether you see crashes in multithreaded code or not, I highly recommend using Thread Sanitizer as a tool to detect and prevent data races. We presented this tool in a WWDC session in 2016 called Thread Sanitizer and Static Analysis. You can enable Thread Sanitizer in the scheme editor of your project: in the Diagnostics tab, you can find a checkbox called Thread Sanitizer. But remember, this is a runtime tool; you have to run your app and exercise code at runtime to find bugs.
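To make the memory-ordering point concrete, here is a minimal sketch of a publication pattern; the names are illustrative, and the fix uses explicit release/acquire atomics so the code is correct on both architectures:

```c
#include <stdatomic.h>

// With a plain int flag, this pattern is a data race. Intel's stronger
// memory ordering often hides it, while arm64's weaker ordering can let
// the reader observe ready == 1 before the payload write is visible.
static int payload;
static _Atomic int ready;

void publish(int value) {
    payload = value;
    // The release store orders the payload write before the flag
    // on every architecture.
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int try_consume(int *out) {
    // The acquire load pairs with the release store above.
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
        *out = payload;
        return 1;
    }
    return 0;
}
```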
So this concludes the tips I had for working with universal apps. Let me now wrap up the session and summarize what we have talked about today. Mac apps should from now on be built as universal apps, and you can start bringing them to Apple Silicon today. Scan your projects for non-universal binaries; you will need to get them updated as universal to run your app natively. Use Xcode 12 to build your app as universal, identify and fix any portability issues you discover along the way, and don't forget to run and test the native build of your app. Watch out for any of the platform differences that we have discussed today.
Finally, I would like to remind you that developer.apple.com/documentation contains many more details. The article called Apple Silicon provides guidance and answers to questions you might have about porting your app. Thank you for watching.