Columns: question (string), answer (string), tag (string, 130 distinct values), question_id (int64), score (int64)
Are there any configuration management tools for Windows like those the *nix world has? I am looking for something like Chef or Puppet. I have found CFEngine, but it still looks very *nix-centric. Ideally it would be open source and command-line driven. The idea is to put together an automated infrastructure with Windows-based servers. Our current IT department does not allow non-Windows servers.
Chef is supported on Windows by Opscode. While we don't run Windows for any of our infrastructure, we do have developers who are continually improving our Windows support. We also get community contributions, and most of the early-phase Windows functionality for Chef was contributed by the community.

Important: Opscode now provides an MSI installer for Chef on Windows. This makes it easier than ever to get Chef and Ruby installed on Windows.

While we have a lot of Unix/Linux background across our teams, our intention is that Windows is treated as a first-class citizen. 2012 will be a big year for Chef and Windows. Keep an eye on the Opscode blog for announcements.

The following Chef resources work on Windows:

- Environment (sets Windows environment variables)
- User
- Group
- Mount
- File
- Gem Package
- Remote File
- Cookbook File
- Template
- Service
- Ruby Block
- Execute

That is, these are resources included in Chef itself. As Chef is extensible with cookbooks, many more resources are added through a variety of Windows-specific cookbooks. Read on for more information.

You can get started with using Chef and Windows here: http://wiki.opscode.com/display/chef/Fast+Start+Guide+for+Windows

Originally, Doug MacEchern wrote some cookbooks to do a number of things to automate Windows, too: https://github.com/dougm/site-cookbooks/tree/master/windows

This information and more is available on the Chef Wiki: http://wiki.opscode.com/display/chef/Installation+on+Windows

Update

The following cookbook adds new resources to Chef to manage Windows: http://community.opscode.com/cookbooks/windows

It is an update/rewrite of Doug's fine resources from his repository linked above. Documentation is available on the Chef Wiki.

The following cookbook deploys PowerShell and provides a resource to run PowerShell commands/scripts directly in Chef recipes: http://community.opscode.com/cookbooks/powershell

Documentation is available in the README.md included in the cookbook tarball.

Additional cookbooks for installing 7-zip and managing IIS and SQL Server have been added. Our "database" cookbook has been extended with a resource/provider for managing SQL Server databases and users (with the tds rubygem). The knife-windows plugin for knife adds functionality for interacting with Windows systems to provision them with a Chef installation.

Update: We have now added File ACL support for Windows to Chef, for all the usual file/directory suspects.
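For a concrete picture of what a recipe using these built-in resources might look like, here is a minimal hypothetical sketch; the variable name, value, and service name are illustrative assumptions, not taken from any Opscode cookbook:

```ruby
# hypothetical recipes/windows_example.rb -- illustrative values only
# Set a Windows environment variable using the env resource.
env 'JAVA_HOME' do
  value 'C:\\Program Files\\Java\\jdk7'
  action :create
end

# Manage a Windows service using the service resource.
service 'Spooler' do
  action [:enable, :start]
end
```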
CFEngine
4,910,034
70
I'm in the process of evaluating if and how a CF .NET enterprise application can be ported to run on Android devices. The application on Windows Mobile phones is run in kiosk mode, where the application autostarts in full-screen mode after booting and the users are unable to accidentally or willingly access any other parts of the phone. Is it possible on Android to have only one application autostart after booting and prevent users from accidentally (or willingly) accessing any other parts of the Android device?
You can autostart applications on boot by listening to the android.intent.action.BOOT_COMPLETED intent in a BroadcastReceiver and start your Activity from there. In the Activity you can register yourself as the new default homescreen [1] and handle the keys. I think there are some instances that you can't handle without modifying the framework (like long-press on Home to show currently active applications) - I could also be mistaken though. But for a prototype that could be sufficient. Have fun tinkering!

[1]:

```xml
<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.HOME" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>
```
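A minimal sketch of the receiver side described above, assuming a hypothetical KioskActivity and that the receiver plus the RECEIVE_BOOT_COMPLETED permission are declared in the manifest:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class BootReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_BOOT_COMPLETED.equals(intent.getAction())) {
            // Launch the kiosk activity as soon as the device finishes booting.
            Intent i = new Intent(context, KioskActivity.class);
            i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); // required when starting from a receiver
            context.startActivity(i);
        }
    }
}
```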
kiosk
2,068,084
121
I need to disable the Home and other system buttons in my Android application. Example: MX Player (see it on Google Play) - you can press the "lock" icon on the player screen and it locks all hardware and software system buttons. It works fine WITHOUT ROOTING. I tested it on several devices with different Android versions. I tried to disassemble the Kids Lock (plugin) but have no clue how it works yet. I need the same solution as the Kids Lock (plugin) for MX Player: disable Home, Back and all other system buttons. Any suggestions?
First off, please think long and hard about whether you really want to disable the Home button or any other button for that matter (e.g. the Back button); this is not something that should be done (at least most of the time, this is bad design). I can speak only for myself, but if I downloaded an app that didn't let me do something like clicking an OS button, the next thing I would do is uninstall that app and leave a very bad review. I also believe that your app will not be featured on the App Store.

Now... Notice that MX Player is asking permission to draw on top of other applications. Since you cannot override the Home button on an Android device (at least not in the latest OS versions), MX Player draws itself on top of your launcher when you "lock" the app and click the Home button. For an example that is a bit more simple and straightforward to understand, look at the Facebook Messenger app.

As I was asked to provide some more info about MX Player Status Bar and Navigation Bar "overriding", I'm editing my answer to include these topics too.

First things first, MX Player is using Immersive Full-Screen Mode (DevBytes Video) on KitKat.

Android 4.4 (API Level 19) introduces a new SYSTEM_UI_FLAG_IMMERSIVE flag for setSystemUiVisibility() that lets your app go truly "full screen." This flag, when combined with the SYSTEM_UI_FLAG_HIDE_NAVIGATION and SYSTEM_UI_FLAG_FULLSCREEN flags, hides the navigation and status bars and lets your app capture all touch events on the screen.

When immersive full-screen mode is enabled, your activity continues to receive all touch events. The user can reveal the system bars with an inward swipe along the region where the system bars normally appear. This clears the SYSTEM_UI_FLAG_HIDE_NAVIGATION flag (and the SYSTEM_UI_FLAG_FULLSCREEN flag, if applied) so the system bars become visible. This also triggers your View.OnSystemUiVisibilityChangeListener, if set. However, if you'd like the system bars to automatically hide again after a few moments, you can instead use the SYSTEM_UI_FLAG_IMMERSIVE_STICKY flag. Note that the "sticky" version of the flag doesn't trigger any listeners, as system bars temporarily shown in this mode are in a transient state.

Second: Hiding the Status Bar
Third: Hiding the Navigation Bar

Please note that although immersive full screen is KitKat-only, hiding the Status Bar and Navigation Bar is not. I don't have much to say about the 2nd and 3rd points; you get the idea I believe, it's a fast read in any case. Just make sure you pay close attention to View.OnSystemUiVisibilityChangeListener.

I added a Gist that explains what I meant; it's not complete and needs some fixing, but you'll get the idea: https://gist.github.com/Epsiloni/8303531
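As a sketch of the immersive-sticky call that the quoted documentation describes (this is standard Android 4.4+ API usage, not code taken from MX Player):

```java
import android.app.Activity;
import android.view.View;

public class FullScreenHelper {
    // Call from an Activity, e.g. from onWindowFocusChanged() when it gains focus.
    public static void enterImmersiveMode(Activity activity) {
        View decorView = activity.getWindow().getDecorView();
        decorView.setSystemUiVisibility(
                  View.SYSTEM_UI_FLAG_LAYOUT_STABLE
                | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION    // hide the navigation bar
                | View.SYSTEM_UI_FLAG_FULLSCREEN         // hide the status bar
                | View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY); // re-hide after the user swipes
    }
}
```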
kiosk
17,549,478
82
We are using Chrome in kiosk mode, and with the recent addition of pinch zoom support users are accidentally causing the application to zoom. They then think they've broken it and simply walk away, leaving the application (and subsequently a 55" touch screen) in a broken state. So far the only thing that has worked has been stopping event propagation for touch events over 2 points. The issues with that are that we can't do multitouch apps in that case, and if you act fast the browser reacts before the JavaScript does, which in our tests still happens by accident with users. I've tried the meta tags; they do not work. Honestly, I wish I could disable Chrome zooming entirely, but I can't find a way to do that. How can I stop the browser from zooming?
We've had a similar problem: it manifests as the browser zooming but JavaScript receiving no touch event (or sometimes just a single point before zooming starts). We've found these possible (but possibly not long-term) solutions:

1. Disable the pinch / swipe features when using kiosk mode

If these command-line settings remain in Chrome, you can do the following:

```
chrome.exe --kiosk --incognito --disable-pinch --overscroll-history-navigation=0
```

--disable-pinch - disables the pinch-to-zoom functionality
--overscroll-history-navigation=0 - disables the swipe-to-navigate functionality

2. Disable pinch zoom using the Chrome flags chrome://flags/#enable-pinch

Navigate to the URL chrome://flags/#enable-pinch in your browser and disable the feature. The pinch zoom feature is currently experimental but turned on by default, which probably means it will be force-enabled in future versions. If you're in kiosk mode (and control the hardware/software) you could probably toggle this setting upon installation and then prevent Chrome updates going forward. There is already a roadmap ticket for removing this setting at Chromium Issue 304869. The fact that the browser reacts before JavaScript can prevent it is definitely a bug and has been logged at the Chromium bug tracker. Hopefully it will be fixed before the feature is permanently enabled, or fingers crossed they'll leave it as a setting.

3. Disable all touches, whitelist for elements and events matching your app

In all tests that we've conducted, adding preventDefault() to the document stops the zooming (and all other swipe/touch events) in Chrome:

```javascript
document.addEventListener('touchstart', function(event){
    event.preventDefault();
}, {passive: false});
```

If you attach your touch-based functionality higher up in the DOM, it'll activate before it bubbles to the document's preventDefault() call. In Chrome it is also important to include the eventListenerOptions parameter because as of Chrome 51 a document-level event listener is set to {passive: true} by default. This disables normal browser features like swipe to scroll though; you would probably have to implement those yourself. If it's a full-screen, non-scrollable kiosk app, maybe these features won't be important.
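To illustrate option 3, a small sketch (the #app element id is an assumption): handlers attached to your own elements run while the event bubbles up from the target, before it reaches the document-level preventDefault() listener:

```javascript
// Block the browser's default touch behaviour (pinch zoom, swipe navigation)...
document.addEventListener('touchstart', function (event) {
  event.preventDefault();
}, {passive: false});

// ...but still handle touches inside the app's own container, which the
// browser dispatches before the event bubbles up to the document listener.
document.getElementById('app').addEventListener('touchstart', function (event) {
  // app-specific touch handling goes here
  console.log('touch at', event.touches[0].clientX, event.touches[0].clientY);
});
```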
kiosk
22,999,829
53
I am implementing a kiosk mode application, and I have successfully made the application full-screen without the status bar appearing post 4.3, but I am unable to hide the status bar in 4.3 and 4.4, as the status bar appears when we swipe down from the top of the screen. I have tried to make it full screen by specifying the full-screen theme in the manifest, setting window flags (i.e. setFlags), and setSystemUiVisibility. Possible duplicate, but no concrete solution found: Permanently hide Android Status Bar. Finally, what I want is: how do I hide the status bar permanently in an activity on Android 4.3, 4.4, 5 and 6?
We could not prevent the status bar appearing in full-screen mode on KitKat devices, so we made a hack which still suits the requirement: block the status bar from expanding. For that to work, the app was not made full screen. We put an overlay over the status bar and consumed all input events. It prevented the status bar from expanding.

Note: customViewGroup is a custom class which extends any layout (FrameLayout, RelativeLayout, etc.) and consumes touch events. To consume touch events, override the onInterceptTouchEvent method of the view group and return true.

Updated

```xml
<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
```

customViewGroup implementation code:

```java
WindowManager manager = ((WindowManager) getApplicationContext()
        .getSystemService(Context.WINDOW_SERVICE));

WindowManager.LayoutParams localLayoutParams = new WindowManager.LayoutParams();
localLayoutParams.type = WindowManager.LayoutParams.TYPE_SYSTEM_ERROR;
localLayoutParams.gravity = Gravity.TOP;
localLayoutParams.flags =
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE |
        // this is to enable the notification to receive touch events
        WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL |
        // draws over status bar
        WindowManager.LayoutParams.FLAG_LAYOUT_IN_SCREEN;

localLayoutParams.width = WindowManager.LayoutParams.MATCH_PARENT;
localLayoutParams.height = (int) (50 * getResources()
        .getDisplayMetrics().scaledDensity);
localLayoutParams.format = PixelFormat.TRANSPARENT;

customViewGroup view = new customViewGroup(this);
manager.addView(view, localLayoutParams);
```
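A sketch of what the customViewGroup mentioned in the note might look like; the base layout (FrameLayout) is just one of the choices the note allows, and the class name simply matches the snippet above:

```java
import android.content.Context;
import android.view.MotionEvent;
import android.widget.FrameLayout;

public class customViewGroup extends FrameLayout {

    public customViewGroup(Context context) {
        super(context);
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent ev) {
        // Swallow every touch event so the status bar underneath
        // can never be dragged down / expanded.
        return true;
    }
}
```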
kiosk
25,284,233
31
I am developing a kiosk and am now working on the admin side. In order to get to the admin screen, the user needs to tap the screen 5 times within 3 seconds, or else nothing will happen.
Please read the comments in the code; it is quite straightforward.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

public class MainActivity extends Activity {

    private int tapCount = 0;
    private long tapCounterStartMillis = 0;

    // detect any touch event on the screen (instead of on a specific view)
    @Override
    public boolean onTouchEvent(MotionEvent event) {
        int eventaction = event.getAction();
        if (eventaction == MotionEvent.ACTION_UP) {
            // get the current system milliseconds
            long time = System.currentTimeMillis();

            // if it is the first time, or if it has been more than 3 seconds since
            // the first tap (so it is like a new try), we reset everything
            if (tapCounterStartMillis == 0 || (time - tapCounterStartMillis > 3000)) {
                tapCounterStartMillis = time;
                tapCount = 1;
            }
            // it is not the first tap, and it has been less than 3 seconds since the first
            else { // time - tapCounterStartMillis < 3000
                tapCount++;
            }

            if (tapCount == 5) {
                // do whatever you need
            }
            return true;
        }
        return false;
    }
}
```
kiosk
21,104,263
26
We are looking to print to a POS printer connected to the machine where Apache is running. Due to the design of the application and its deployment, printing should be done from the server (it should detect the order and send it to different printers with different print formats... bill, kitchen orders, and so on...). For this reason and others (like accessing the application from an iPad, for example) we discarded options like the QZ-Print applet and need to print directly server side. We searched a lot and found that there is an extension called php-printer, but it seems outdated and only works under Windows. We followed this code (http://mocopat.wordpress.com/2012/01/18/php-direct-printing-printer-dot-matrix-lx-300/):

```php
$tmpdir = sys_get_temp_dir();      # get the temporary directory to store the file
$file = tempnam($tmpdir, 'ctk');   # name of the temporary file that will be printed
$handle = fopen($file, 'w');
$condensed = Chr(27) . Chr(33) . Chr(4);
$bold1 = Chr(27) . Chr(69);
$bold0 = Chr(27) . Chr(70);
$initialized = chr(27).chr(64);
$condensed1 = chr(15);
$condensed0 = chr(18);
$corte = Chr(27) . Chr(109);

$Data  = $initialized;
$Data .= $condensed1;
$Data .= "==========================\n";
$Data .= "| ".$bold1."OFIDZ MAJEZTY".$bold0." |\n";
$Data .= "==========================\n";
$Data .= "Ofidz Majezty is here\n";
$Data .= "We Love PHP Indonesia\n";
$Data .= "We Love PHP Indonesia\n";
$Data .= "We Love PHP Indonesia\n";
$Data .= "We Love PHP Indonesia\n";
$Data .= "We Love PHP Indonesia\n";
$Data .= "--------------------------\n";
$Data .= $corte;

fwrite($handle, $Data);
fclose($handle);
copy($file, "//localhost/KoTickets");   # do the actual printing
unlink($file);
```

And it works, but this sends plain text, and we need to send an image (logo) and format a nicer bill. We tried creating a PDF and "sending" it to the printer in the same way, but it just prints blank. I found a library to work with network printers (escpos-php on GitHub), but we need to work with USB printers too, to avoid making our customers change hardware. Any ideas how to achieve this? Thanks in advance.
Author of escpos-php here. If your printers do support ESC/POS (most thermal receipt printers seem to use some sub-set of it), then I think the driver will accommodate your use case: USB or network printing, logo, some formatting. Some of these are quite recent additions.

USB printing

escpos-php prints to a file pointer. On Linux, you can make the USB printer visible as a file using the usblp driver, and then just fopen() it (USB receipt example, blog post about installing a USB printer on Linux). So printing "Hello world" on a USB printer is only slightly different to printing to a networked printer:

```php
<?php
require __DIR__ . '/vendor/autoload.php';
use Mike42\Escpos\PrintConnectors\FilePrintConnector;
use Mike42\Escpos\Printer;

$connector = new FilePrintConnector("/dev/usb/lp0");
$printer = new Printer($connector);
$printer -> text("Hello World!\n");
$printer -> cut();
$printer -> close();
```

Or, more like the code you are currently using successfully, you could write to a temp file and copy it:

```php
<?php
require __DIR__ . '/vendor/autoload.php';
use Mike42\Escpos\PrintConnectors\FilePrintConnector;
use Mike42\Escpos\Printer;

/* Open file */
$tmpdir = sys_get_temp_dir();
$file = tempnam($tmpdir, 'ctk');

/* Do some printing */
$connector = new FilePrintConnector($file);
$printer = new Printer($connector);
$printer -> text("Hello World!\n");
$printer -> cut();
$printer -> close();

/* Copy it over to the printer */
copy($file, "//localhost/KoTickets");
unlink($file);
```

So in your POS system, you would need a function which returns a file pointer based on your customer configuration and preferred destination. Receipt printers respond quite quickly, but if you have a few iPads making orders, you should wrap operations to each printer with a file lock (flock()) to avoid concurrency-related trouble. Also note that USB support on Windows is un-tested.

Logo & Formatting

Once you have figured out how you plan to talk to the printer, you can use the full suite of formatting and image commands. A logo can be printed from a PNG file like so:

```php
use Mike42\Escpos\EscposImage;
$logo = EscposImage::load("foo.png");
$printer -> graphics($logo);
```

And for formatting, the README.md and the example below should get you started. For most receipts, you only really need:

- selectPrintMode() to alter font sizes.
- setEmphasis() to toggle bold.
- setJustification() to left-align or center some text or images.
- cut() after each receipt.

I would also suggest that where you are currently using an example that draws boxes like this:

```
=========
|       |
=========
```

you could make use of the characters in IBM Code Page 437, which are designed for drawing boxes and are supported by many printers - just include characters 0xB3 to 0xDA in the output. They aren't perfect, but it looks a lot less "text"-y.

```php
$box  = "\xda".str_repeat("\xc4", 10)."\xbf\n";
$box .= "\xb3".str_repeat(" ", 10)."\xb3\n";
$box .= "\xc0".str_repeat("\xc4", 10)."\xd9\n";
$printer -> textRaw($box);
```

Full example

The below example is also now included with the driver. I think it looks like a fairly typical store receipt, formatting-wise, and could be easily adapted to your kitchen scenario. Scanned output: (image not reproduced here). PHP source code to generate it:

```php
<?php
require __DIR__ . '/vendor/autoload.php';
use Mike42\Escpos\Printer;
use Mike42\Escpos\EscposImage;
use Mike42\Escpos\PrintConnectors\FilePrintConnector;

/* Open the printer; this will change depending on how it is connected */
$connector = new FilePrintConnector("/dev/usb/lp0");
$printer = new Printer($connector);

/* Information for the receipt */
$items = array(
    new item("Example item #1", "4.00"),
    new item("Another thing", "3.50"),
    new item("Something else", "1.00"),
    new item("A final item", "4.45"),
);
$subtotal = new item('Subtotal', '12.95');
$tax = new item('A local tax', '1.30');
$total = new item('Total', '14.25', true);
/* Date is kept the same for testing */
// $date = date('l jS \of F Y h:i:s A');
$date = "Monday 6th of April 2015 02:56:25 PM";

/* Start the printer */
$logo = EscposImage::load("resources/escpos-php.png", false);
$printer = new Printer($connector);

/* Print top logo */
$printer -> setJustification(Printer::JUSTIFY_CENTER);
$printer -> graphics($logo);

/* Name of shop */
$printer -> selectPrintMode(Printer::MODE_DOUBLE_WIDTH);
$printer -> text("ExampleMart Ltd.\n");
$printer -> selectPrintMode();
$printer -> text("Shop No. 42.\n");
$printer -> feed();

/* Title of receipt */
$printer -> setEmphasis(true);
$printer -> text("SALES INVOICE\n");
$printer -> setEmphasis(false);

/* Items */
$printer -> setJustification(Printer::JUSTIFY_LEFT);
$printer -> setEmphasis(true);
$printer -> text(new item('', '$'));
$printer -> setEmphasis(false);
foreach ($items as $item) {
    $printer -> text($item);
}
$printer -> setEmphasis(true);
$printer -> text($subtotal);
$printer -> setEmphasis(false);
$printer -> feed();

/* Tax and total */
$printer -> text($tax);
$printer -> selectPrintMode(Printer::MODE_DOUBLE_WIDTH);
$printer -> text($total);
$printer -> selectPrintMode();

/* Footer */
$printer -> feed(2);
$printer -> setJustification(Printer::JUSTIFY_CENTER);
$printer -> text("Thank you for shopping at ExampleMart\n");
$printer -> text("For trading hours, please visit example.com\n");
$printer -> feed(2);
$printer -> text($date . "\n");

/* Cut the receipt and open the cash drawer */
$printer -> cut();
$printer -> pulse();
$printer -> close();

/* A wrapper to organise item names & prices into columns */
class item
{
    private $name;
    private $price;
    private $dollarSign;

    public function __construct($name = '', $price = '', $dollarSign = false)
    {
        $this -> name = $name;
        $this -> price = $price;
        $this -> dollarSign = $dollarSign;
    }

    public function __toString()
    {
        $rightCols = 10;
        $leftCols = 38;
        if ($this -> dollarSign) {
            $leftCols = $leftCols / 2 - $rightCols / 2;
        }
        $left = str_pad($this -> name, $leftCols);
        $sign = ($this -> dollarSign ? '$ ' : '');
        $right = str_pad($sign . $this -> price, $rightCols, ' ', STR_PAD_LEFT);
        return "$left$right\n";
    }
}
```
kiosk
25,973,046
24
I am modifying the AOSP source code because my app needs to run in a kiosk environment. I want Android to boot directly into the app. I've excluded launcher2 from generic_no_telephony.mk, and added the app there. Now Android prompts me all the time to choose default launcher. The two choices that are available on the pop-up: Home Sample My app. How can I exclude the Android Home Sample Launcher? Or is there another way to set the default launcher in an AOSP build?
Instead of modifying the AOSP make files (which is annoying because then you need to track your changes), it is easier to add a LOCAL_OVERRIDES_PACKAGES line to your app's make file. For instance:

```makefile
LOCAL_OVERRIDES_PACKAGES := Launcher2 Launcher3
```

added to your Android.mk file will ensure that those packages are not added to any build where this package is added. Following that, you should do a make installclean and then start your build the same way you always make your build. The make installclean is important to remove the packages that are left behind by the previous build. I also just found a nice answer to how to do this in another question, see: How would I make an embedded Android OS with just one app?
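For context, a hypothetical Android.mk fragment showing where that line sits; the module name and source layout are placeholders, not values from the original answer:

```makefile
# Android.mk of the kiosk app (illustrative values)
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

LOCAL_PACKAGE_NAME := KioskLauncher
LOCAL_MODULE_TAGS := optional
LOCAL_SRC_FILES := $(call all-java-files-under, src)
LOCAL_OVERRIDES_PACKAGES := Launcher2 Launcher3

include $(BUILD_PACKAGE)
```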
kiosk
22,911,156
17
So, I need to build a kiosk type of application for use in an internet cafe. The app needs to load and display some options of things to do. One option is to launch IE to surf. Another option is to play a game. I've been reading that what I probably want to do is replace the windows shell and have it run my app when the OS loads. I'd also have to disable the task manager. This is a multipart question. Can I use dotnet to create this? What OS do I have to use? I keep seeing windows xp embedded pop up in my readings Will there be any issues with the app occasionally loading IE? Are there any other tasks that I should be aware of when doing this? Other than task manager and replacing the shell. If I can do it in c#, is there anything in particular that I should know about? Maybe my forms have to inherit certain classes, etc...
You should check out Microsoft SteadyState. It has plenty of features and is free to use.

Windows SteadyState Features

Whether you manage computers in a school computer lab or an Internet cafe, a library, or even in your home, Windows SteadyState helps make it easy for you to keep your computers running the way you want them to, no matter who uses them.

- Windows Disk Protection – Help protect the Windows partition, which contains the Windows operating system and other programs, from being modified without administrator approval. Windows SteadyState allows you to set Windows Disk Protection to remove all changes upon restart, to remove changes at a certain date and time, or to not remove changes at all. If you choose to use Windows Disk Protection to remove changes, any changes made by shared users when they are logged on to the computer are removed when the computer is restarted.
- User Restrictions and Settings – The user restrictions and settings can help to enhance and simplify the user experience. Restrict user access to programs, settings, Start menu items, and options in Windows. You can also lock shared user accounts to prevent changes from being retained from one session to the next.
- User Account Manager – Create and delete user accounts. You can use Windows SteadyState to create user accounts on alternative drives that will retain user data and settings even when Windows Disk Protection is turned on. You can also import and export user settings from one computer to another, saving valuable time and resources.
- Computer Restrictions – Control security settings, privacy settings, and more, such as preventing users from creating and storing folders in drive C and from opening Microsoft Office documents from Internet Explorer®.
- Schedule Software Updates – Update your shared computer with the latest software and security updates when it is convenient for you and your shared users.

Download: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=d077a52d-93e9-4b02-bd95-9d770ccdb431
kiosk
3,581,059
14
I have a kiosk mode application which hides all traces of the System UI (notification bar and navigation buttons). On versions of Android pre-Lollipop the following works fine (as root):

```
service call activity 42 s16 com.android.systemui
```

In Lollipop however, this makes the screen completely black as well as hiding the System UI. For this reason it cannot be used. Does anyone know of a workaround for this? I have tried the Device Owner/Admin solution for Screen Pinning, but unfortunately this is not acceptable because it does not hide the System UI entirely, but leaves the back button visible when swiping from the bottom of the screen.
If the device is rooted you could disable the System UI:

```
pm disable-user com.android.systemui
```

and then the device-owner method works fine. This method should not be used if the device runs other apps, because if your app crashes the System UI might be disabled and the user can't interact with the device.

```xml
<?xml version='1.0' encoding='utf-8' standalone='yes' ?>
<device-owner package="com.mycompany" name="*mycompany" />
```
kiosk
27,942,053
14
How can I programmatically enable/disable an android screen reader service such as TalkBack? I am developing a kiosk type application that will be installed on an Android device that will be loaned to visitors while they visit a particular museum. (We are still in the process of determining what device we will use.) The plan is to only allow users to use our app and not have access to the android settings application. However, we'd like to allow users to configure some accessibility settings. When they are finished with the device, we need to restore all the settings to our defaults. The discussion at the link below has many suggesting launching Android's Settings app. But we don't want users accessing many other settings. How to Programmatically Enable/Disable Accessibility Service in Android
Only system apps can enable/disable an accessibility service programmatically. System apps can directly write to the secure settings db to start an accessibility service:

```java
Settings.Secure.putString(getContentResolver(),
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES,
        "com.packagename/com.packagename.componentname");
```

The following permission is required to write to the secure settings db:

```xml
<uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" />
```

For non-system apps, the only way to start an accessibility service is to direct the user to the accessibility settings screen via an intent and let them start the service manually:

```java
Intent intent = new Intent(Settings.ACTION_ACCESSIBILITY_SETTINGS);
startActivity(intent);
```
kiosk
38,360,198
14
I have Chrome opening in kiosk mode - I added the --kiosk flag to the Chrome shortcut, which works as expected. The kiosk allows browsing of our intranet and the internet. I realise I can use JavaScript to redirect pages on our intranet, but what about the internet? We don't want people, for example, browsing to YouTube and then walking away. We would like to have the browser redirect to www.MyDomain.com after x minutes of inactivity. I have tried Kiosk here, which does exactly what we require, but the swipe left/right gestures don't seem to work for page navigation (I have already contacted the developer via GitHub). Any suggestions?
I managed to find an answer to this question on another site. Ended up using a chrome extension called Idle Reset. Hopefully it helps somebody else.
kiosk
33,284,153
12
I wish to set up what is usually called a kiosk, running Firefox locked down to our own specific home page (and links from there). The base operating system is CentOS 5 (i.e. just like Red Hat Enterprise 5). Ideally I want Firefox to start full screen (and I have installed the full-fullscreen addon to help with this), and to be locked as such (i.e. F11 does not work). I need to be able to install this system using one or more rpm files. I have tested my fullscreen Firefox setup rpm under Gnome, and it works fine - my Gnome desktop is 1024x768, and the selected home page comes up exactly filling the screen - looks great. However, I do not want to bother with a desktop environment (like Gnome or KDE), just run Firefox as the sole X client program, with a fixed screen size of 1024x768. I have built rpms to install X, configure it to run at 1024x768, and fire up X automatically from an autologin using shell scripts. My main autologin script contains this:

```
startx ~/client/xClient.sh -- :1 &
```

xClient.sh contains this:

```sh
while [ true ]
do
  firefox
done
```

My problem is that Firefox does not come up full screen under this setup. The Firefox window is smaller than the screen, and the top left corner is off the screen - this means the web page gets scrollbars, the top and left of the page do not show, and there is a black area along the bottom and right. Does anyone know the reason for this behaviour? What solutions can you suggest? I suppose, if necessary, I could install Gnome on the machine and then try to lock it down - but it seems silly to add something as complex as Gnome just to get the window to appear the right size, and in the right place! Plus there is the extra task of trying to lock Gnome down so the users can't do anything else with the machine. If you think this question should not be on Stack Overflow, please tell me where it should go. (I think writing rpm and shell scripts is programming, but maybe they don't count? If not, sorry!)
You have 2 options:

1. Install a kiosk plug-in that allows you to start Firefox automatically in full-screen mode (amongst other things). One example would be R-kiosk.
2. Skip Firefox and create a XUL application that does what you want. You can find a sample application here, and you can find full-screen code (not tested) here.
kiosk
9,586,290
11
What exactly does kiosk: true in the BrowserWindow config of a new ElectronJS window do? The documentation just states that the parameter indicates, that the window is in 'kiosk' mode. I was unable to find information on what this means.
Basically, Kiosk mode is a Windows operating system (OS) feature that only allows one application to run. Kiosk mode is a common way to lock down a Windows device when that device is used for a specific task or used in a public setting. So in electron kiosk mode, we'd have the ability to lock down our application to a point that users are restricted to the actions that we want them to perform. Also, the browser would merely act as our canvas with exactly defined capabilities and doesn't get into our way. And this is why you want to use Electron!
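A minimal sketch of how the flag is passed in an Electron main process; this uses the standard electron API, and the file name and URL are placeholders:

```javascript
// main.js -- minimal Electron entry point (sketch)
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    kiosk: true,           // full screen, no window chrome, hard for users to leave
    autoHideMenuBar: true  // keep the menu bar out of the way as well
  });
  win.loadURL('https://example.com/'); // placeholder URL
});
```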
kiosk
70,456,451
11
I have a homemade Sinatra application for which I intend to use Heroku to host it. I use foreman and shotgun in development, with the following Procfile: web: shotgun config.ru -s thin -o 0.0.0.0 -p $PORT -E $RACK_ENV It works great with both development and production. But the thing is, I don't want to use shotgun in production since it's too slow. Can we use separate Procfile configurations for both dev and prod?
Use multiple Procfiles and specify the -f or --procfile running option to select one.

In dev (Procfile.dev contains your shotgun web process):

```
foreman start -f Procfile.dev
```

In production, foreman start will pick up the normal Procfile.

Alternatively, you could create a bin directory in your app with a script to start the appropriate web server depending on $RACK_ENV (an idea I found in a comment made by the creator of Foreman, so worth considering); see the sketch below.
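A sketch of that bin-directory idea, assuming a hypothetical bin/web script that the Procfile points at (web: bin/web) and that has been marked executable:

```sh
#!/bin/sh
# bin/web -- choose the web server based on RACK_ENV (illustrative sketch)
if [ "$RACK_ENV" = "development" ]; then
  exec shotgun config.ru -s thin -o 0.0.0.0 -p "$PORT" -E "$RACK_ENV"
else
  exec bundle exec thin start -R config.ru -a 0.0.0.0 -p "$PORT" -e "$RACK_ENV"
fi
```

Using exec keeps the server as the process foreman manages, so signals are delivered to it directly.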
Foreman
11,592,798
78
When I run foreman I get the following:

```
> foreman start
16:47:56 web.1     | started with pid 27122
```

Only if I stop it (via Ctrl-C) does it show me what is missing:

```
^CSIGINT received
16:49:26 system    | sending SIGTERM to all processes
16:49:26 web.1     | => Booting Thin
16:49:26 web.1     | => Rails 3.0.0 application starting in development on http://0.0.0.0:5000
16:49:26 web.1     | => Call with -d to detach
16:49:26 web.1     | => Ctrl-C to shutdown server
16:49:26 web.1     | >> Thin web server (v1.3.1 codename Triple Espresso)
16:49:26 web.1     | >> Maximum connections set to 1024
16:49:26 web.1     | >> Listening on 0.0.0.0:5000, CTRL+C to stop
16:49:26 web.1     | >> Stopping ...
16:49:26 web.1     | Exiting
16:49:26 web.1     | >> Stopping ...
```

How do I fix it?
I've been able to resolve this issue in 2 different ways:

1. From https://github.com/ddollar/foreman/wiki/Missing-Output: if you are not seeing any output from your program, there is a likely chance that it is buffering stdout. Ruby buffers stdout by default. To disable this behavior, add this code as early as possible in your program:

```ruby
# ruby
$stdout.sync = true
```

2. By installing foreman via the Heroku Toolbelt package.

But I still don't know what's happening nor why these 2 ways resolved the issue…
Foreman
8,717,198
55
Can you comment out lines in a .env file read by foreman?
FWIW, '#' appears to work as a comment character. It at least has the effect of removing unwanted environment declarations. It might be declaring others starting with a #, but... it still works. E.g.:

```
DATABASE_URL=postgres://mgregory:@localhost/mgregory
#DATABASE_URL=mysql://root:secret@localhost:3306/cm_central
```

results in postgres being used by django when started by foreman with this .env file, which is what I wanted.
Foreman
26,713,508
50
I want to be able to set environment variables in my Django app for tests to be able to run. For instance, my views rely on several API keys. There are ways to override settings during testing, but I don't want them defined in settings.py as that is a security issue. I've tried in my setup function to set these environment variables, but that doesn't work to give the Django application the values. class MyTests(TestCase): def setUp(self): os.environ['TEST'] = '123' # doesn't propogate to app When I test locally, I simply have an .env file I run with foreman start -e .env web which supplies os.environ with values. But in Django's unittest.TestCase it does not have a way (that I know) to set that. How can I get around this?
The test.support.EnvironmentVarGuard is an internal API that might be changed from version to version with breaking (backward incompatible) changes. In fact, the entire test package is for internal use only. It is explicitly stated on the test package documentation page that it's for internal testing of core libraries and NOT a public API (see links below). You should use patch.dict() from Python's standard-library unittest.mock instead. It can be used as a context manager, decorator or class decorator. See the example code below, copied from the official Python documentation:

```python
import os
from unittest.mock import patch

with patch.dict('os.environ', {'newkey': 'newvalue'}):
    print(os.environ['newkey'])    # should print out 'newvalue'
    assert 'newkey' in os.environ  # should be True

assert 'newkey' not in os.environ  # should be True
```

Update: for those who don't read the documentation thoroughly and might have missed the note, read more test package notes at https://docs.python.org/2/library/test.html or https://docs.python.org/3/library/test.html
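Since patch.dict() can also be used as a decorator, here is a short sketch applying it to the kind of Django test case in the question; the test name and values are made up:

```python
import os
from unittest.mock import patch

from django.test import TestCase


class MyTests(TestCase):
    @patch.dict(os.environ, {'TEST': '123'})
    def test_env_var_is_set_for_this_test(self):
        # os.environ is patched only for the duration of this test method.
        self.assertEqual(os.environ['TEST'], '123')
```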
Foreman
31,195,183
46
I have been attempting to complete this tutorial, but have run into a problem with the foreman start line. I am using a windows 7, 64 bit machine and am attempting to do this in the git bash terminal provided by the Heroku Toolbelt. When I enter foreman start I receive: sh.exe": /c/Program Files (x86)/Heroku/ruby-1.9.2/bin/foreman: "c:/Program: bad interpreter: No such file or directory So I tried entering the cmd in git bash by typing cmd and then using foreman start (similar to a comment on one of the answers to this question suggests). This is what that produced: Bad file descriptor c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0 /lib/foreman/engine.rb:377:in `read_nonblock' c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0 /lib/foreman/engine.rb:377:in `block (2 levels) in watch_for_output' c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0 /lib/foreman/engine.rb:373:in `loop' c:/Program Files (x86)/Heroku/ruby-1.9.2/lib/ruby/gems/1.9.1/gems/foreman-0.62.0 /lib/foreman/engine.rb:373:in `block in watch_for_output' 21:06:08 web.1 | exited with code 1 21:06:08 system | sending SIGKILL to all processes I have no clue what the second set of errors is trying to tell me, since the file location it seems to claim engine.rb is running from does not even exist on my computer. I have looked at other answers to similar problems, however I am not receiving similar errors and so do not believe a solution to my problem currently exists.
I had this problem. I fixed it by uninstalling version 0.62 of the foreman gem and installing 0.61:

```
gem uninstall foreman
gem install foreman -v 0.61
```
Foreman
15,399,637
41
I think this is a little, easy question! I'm using .env file to keep all my environment variables, and i'm using foreman. Unfortunately, these environment variables are not being loaded when running rails console rails c so, i'm now loading them manually after running the console, which is not the best way. I'd like to know if there any better way for that.
About a year ago, the "run" command was added to foreman (ref: https://github.com/ddollar/foreman/pull/121). You can use it as follows:

```
foreman run rails console
```

or

```
foreman run rake db:migrate
```
Foreman
15,370,814
34
I have this simple Procfile web: myapp myapp is in the path, but the processes home directory should be ./directory/. How can I specify in the Procfile where the process is to be started? https://github.com/ddollar/foreman/pull/101 doesn't help because it assumes, that this working directory should be the same for every process specified by the Procfile
The shell is the answer. It's as simple as:

```
web: sh -c 'cd ./directory/ && exec appname'
```
Foreman
13,284,310
27
I installed redis this afternoon and it caused a few errors, so I uninstalled it, but this error persists when I launch the app with foreman start. Any ideas on a fix?

```
foreman start
22:46:26 web.1  | started with pid 1727
22:46:26 web.1  | 2013-05-25 22:46:26 [1727] [INFO] Starting gunicorn 0.17.4
22:46:26 web.1  | 2013-05-25 22:46:26 [1727] [ERROR] Connection in use: ('0.0.0.0', 5000)
```
Just type sudo fuser -k 5000/tcp. This will kill all processes associated with port 5000.
Foreman
16,756,624
22
A web app I am writing in JavaScript using node.js. I use Foreman, but I don't want to manually restart the server every time I change my code. Can I tell Foreman to reload the entire web app before handling an HTTP request (i.e. restart the node process)?
Here's an adjusted version of Pendlepants solution. Foreman looks for an .env file to read environment variables. Rather than adding a wrapper, you can just have Foreman switch what command it uses to start things up.

In .env:

```
WEB=node app.js
```

In dev.env:

```
WEB=supervisor app.js
```

In your Procfile:

```
web: $WEB
```

By default, Foreman will read from .env (in production), but in dev just run this:

```
foreman start -e dev.env
```
Foreman
9,131,496
21
I am trying to export my application to another process management format/system (specifically, upstart). In doing so, I have come across a number of roadblocks, mostly due to lacking documentation. As a non-root user, I ran the following command (as shown here): -bash> foreman export upstart /etc/init ERROR: Could not create: /etc/init I "could not create" the directory due to inadequate permissions, so I used sudo: -bash> sudo foreman export upstart /etc/init Password: ERROR: Could not chown /var/log/app to app I "could not chown... to app" because there is no user named app. Where is app coming from? How should I use forman to export to upstart?
app is the default for both the name of the app and the name of the user the application should be run as when the corresponding options (--app and --user) are not used. See the foreman man page for the available options, but note that at the time of this writing the official synopsis did not include [options]:

```
foreman export [options] <format> [location]
```

Example:

```
-bash> sudo foreman export --app foo --user bar upstart /etc/init
Password:
[foreman export] writing: foo.conf
[foreman export] writing: foo-web.conf
[foreman export] writing: foo-web-1.conf
[foreman export] writing: foo-worker.conf
[foreman export] writing: foo-worker-1.conf
```

Result:

```
-bash> l /etc/init/
total 80
drwxr-xr-x  12 root  wheel   408 20 Oct 09:31 .
drwxr-xr-x  94 root  wheel  3196 20 Oct 08:05 ..
-rw-r--r--   1 root  wheel   236 20 Oct 09:31 foo-web-1.conf
-rw-r--r--   1 root  wheel    41 20 Oct 09:31 foo-web.conf
-rw-r--r--   1 root  wheel   220 20 Oct 09:31 foo-worker-1.conf
-rw-r--r--   1 root  wheel    41 20 Oct 09:31 foo-worker.conf
-rw-r--r--   1 root  wheel   315 20 Oct 09:31 foo.conf

-bash> l /var/log/foo/
total 0
drwxr-xr-x   2 bar   wheel    68 20 Oct 09:31 .
drwxr-xr-x  45 root  wheel  1530 20 Oct 09:31 ..
```
Foreman
12,990,842
19
I'm following the heroku tutorial for Heroku/Facebook integration (but I suspect this issue has nothing to do with facebook integration) and I got stuck on the stage where I was supposed to start foreman (I've installed the Heroku installbelt for windows, which includes foreman): > foreman start gives: C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/dependency.rb:247:in `to_specs': Could not find foreman (>= 0) amongst [POpen4-0.1.4, Platform-0.4.0, ZenTest-4.6.2, abstract-1.0.0, actionm ailer-3.0.11, actionmailer-3.0.9, actionpack-3.0.11, actionpack-3.0.9, activemodel-3.0.11, activemodel-3.0.9, activerecord-3.0.11, activerecord-3.0.9, activerecord-sqlserver-adapter-3.0.15, activereso urce-3.0.11, activeresource-3.0.9, activesupport-3.0.11, activesupport-3.0.9, addressable-2.2.6, annotate-2.4.0, arel-2.0.10, autotest-4.4.6, autotest-growl-0.2.16, autotest-rails-pure-4.1.2, autotest -standalone-4.5.8, builder-2.1.2, bundler-1.0.15, diff-lcs-1.1.3, erubis-2.6.6, factory_girl-1.3.3, factory_girl_rails-1.0, faker-0.3.1, gravatar_image_tag-1.0.0.pre2, heroku-2.14.0, i18n-0.5.0, json- 1.6.1, launchy-2.0.5, mail-2.2.19, mime-types-1.17.2, mime-types-1.16, nokogiri-1.5.0-x86-mingw32, open4-1.1.0, pg-0.11.0-x86-mingw32, polyglot-0.3.3, polyglot-0.3.1, rack-1.2.4, rack-1.2.3, rack-moun t-0.6.14, rack-test-0.5.7, rails-3.0.11, rails-3.0.9, railties-3.0.11, railties-3.0.9, rake-0.9.2.2, rake-0.8.7, rb-readline-0.4.0, rdoc-3.11, rdoc-3.8, rest-client-1.6.7, rspec-2.6.0, rspec-core-2.6. 4, rspec-expectations-2.6.0, rspec-mocks-2.6.0, rspec-rails-2.6.1, rubygems-update-1.8.11, rubyzip-0.9.4, rubyzip2-2.0.1, spork-0.9.0.rc8-x86-mingw32, sqlite3-1.3.3-x86-mingw32, sqlite3-ruby-1.3.3, te rm-ansicolor-1.0.7, thor-0.14.6, tiny_tds-0.4.5-x86-mingw32, treetop-1.4.10, treetop-1.4.9, tzinfo-0.3.31, tzinfo-0.3.29, webrat-0.7.1, will_paginate-3.0.pre2, win32-api-1.4.8-x86-mingw32, win32-open3 -0.3.2-x86-mingw32, win32-process-0.6.5, windows-api-0.4.0, windows-pr-1.2.1, zip-2.0.2] (Gem::LoadError) from C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems/dependency.rb:256:in `to_spec' from C:/RailsInstaller/Ruby1.8.7/lib/ruby/site_ruby/1.8/rubygems.rb:1210:in `gem' from C:/Program Files (x86)/ruby-1.9.3/bin/foreman:18 Since I'm a complete noob in this I'm not sure if my question here is a duplicate for Error on 'foreman start' while following the Python/Flask Heroku tutorial (because it's not quite the same error). If so, does anyone have a method for deploying a development environment on windows (for Heruko, Python, Facebook app)? Or should I use Ubuntu for this? Thanks
Although this question doesn't seem to be of interest to anyone here (5 views in ~2 hours, 0 answers, 0 comments...), I have found the solution and am ready to share it with anyone who encounters this:

1. Install the latest Ruby from rubyinstaller.org (1.9.3-p194). Sometimes there is a collision between installs of the same version; in my case I just uninstalled all versions of Ruby, but if you already have another application that needs an older version then you have to be more careful.
2. Check that your system defaults to this version by invoking ruby -v at the command prompt and getting: ruby 1.9.3p194 (2012-04-20) [i386-mingw32] (you may have to close and re-open cmd to pick up the new environment variables).
3. Still in cmd, invoke: gem install foreman and gem install taps.
4. Now go to your Procfile app (e.g. your Heroku example app from the tutorial) and execute foreman start. You should see something like this:

```
18:23:52 web.1  | started with pid 7212
18:23:54 web.1  | * Running on http://0.0.0.0:5000/
18:23:54 web.1  | * Restarting with reloader
```
Foreman
11,434,287
18
I simply followed the getting started with Node.js tutorial from Heroku: https://devcenter.heroku.com/articles/getting-started-with-nodejs#declare-process-types-with-procfile. But I get an error at the part "declare process types with Procfile". My problem is that my cmd (using Windows 7) can't find the command "foreman". Any solutions? I downloaded/installed the Heroku Toolbelt, and the login works fine, but foreman doesn't.
I had the same problem on Windows 7 64-bit, using git's bash. Here's what I did:

1. Uninstall the Toolbelt, Ruby, and Git using Control Panel's "Programs and Features".
2. Reinstall the Toolbelt to C:\Heroku (see known issue for more info).
3. Add C:\Program Files (x86)\git\bin;C:\Heroku\ruby-1.9.2\bin to the system PATH variable: Control Panel, System, Advanced system settings, Environment Variables..., System variables, Variable Path, Edit... (Change ruby-1.9.2 if a future version of the toolbelt includes a newer version of Ruby.)
4. Open a git bash window, uninstall foreman version 0.63, then install version 0.61 (see here for more info):

```
$ gem uninstall foreman
$ gem install foreman -v 0.61
```

Now foreman worked for me:

```
$ foreman start
```
Foreman
19,078,939
18
I am trying to use foreman to start my rails app. Unfortunately I have difficulties connecting my IDE for debugging. I read here about using Debugger.wait_connection = true Debugger.start_remote to start a remote debugging session, but that does not really work out. Question: Is there a way to debug a rails (3.2) app started by foreman? If so, what is the approach?
If you use several workers with the full rails environment you could use the following initializer:

```ruby
# Enabled debugger with foreman, see https://github.com/ddollar/foreman/issues/58
if Rails.env.development?
  require 'debugger'
  Debugger.wait_connection = true

  def find_available_port
    server = TCPServer.new(nil, 0)
    server.addr[1]
  ensure
    server.close if server
  end

  port = find_available_port
  puts "Remote debugger on port #{port}"
  Debugger.start_remote(nil, port)
end
```

And in foreman's logs you'll be able to find the debugger's ports:

```
$ foreman start
12:48:42 web.1     | started with pid 29916
12:48:42 worker.1  | started with pid 29921
12:48:44 web.1     | I, [2012-10-30T12:48:44.810464 #29916]  INFO -- : listening on addr=0.0.0.0:5000 fd=10
12:48:44 web.1     | I, [2012-10-30T12:48:44.810636 #29916]  INFO -- : Refreshing Gem list
12:48:47 web.1     | Remote debugger on port 59269
12:48:48 worker.1  | Remote debugger on port 41301
```

Now run the debugger using:

```
rdebug -c -p [PORT]
```
Foreman
9,558,576
17
I am trying to deploy a Heroku app. I must be doing something wrong with the Procfile. When I run foreman check I get this error:

```
ERROR: no processes defined
```

I get pretty much the same thing when deploying on Heroku:

```
-----> Building runtime environment
-----> Discovering process types
 !     Push failed: cannot parse Procfile.
```

The Procfile looks like this:

```
web: node app.js
```

What did I miss?

Update: I re-did everything from the start and it works properly now. I think I might have had an issue with Unix line endings.
I just encountered "Push failed: cannot parse Procfile." on Windows. I can conclude that it IS a Windows file-format problem, NOT the content of the file itself. Make sure to create a clean file; maybe use Notepad++ or another advanced editor to check the file type.
Foreman
19,846,342
17
We have rails app that is running some foreman processes with bundle exec foreman start, and have googled a lot of different things, and found that the common suggestion is to set up another background process handler, and export the processes there. So essentially let someone else do foreman's job of managing the processes. My question is how do you simply stop or restart foreman processes, as I don't really want to try to export the processes to another manager. Shouldn't there be a simple: foreman restart Since there is a: foreman start Is there a snippet or some other command that anyone has used to restart these processes? Any help or explanation of the foreman tool would be appreciated.
Used monit to control stop and start of foreman processes.
Foreman
18,925,483
16
I have the following Procfile: web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb redis: bundle exec redis-server /usr/local/etc/redis.conf worker: bundle exec sidekiq Running $ foreman start starts up Unicorn, Redis and Sidekiq, but how should i stop them again? Killing Foreman leaves all three up. I can see this using ps: $ ps aux | grep redis | grep -v grep me 61560 0.0 0.0 2506784 1740 s000 S+ 9:36am 0:01.28 redis-server /usr/local/etc/redis.conf $ ps aux | grep sidekiq | grep -v grep me 61561 0.0 1.0 2683796 173284 s000 S+ 9:36am 0:14.18 sidekiq 2.17.0 pathways [0 of 25 busy] $ ps aux | grep unicorn | grep -v grep me 61616 0.0 0.2 2615284 28312 s000 S+ 9:37am 0:00.06 unicorn worker[2] -p 5000 -c ./config/unicorn.rb me 61615 0.0 0.2 2615284 27920 s000 S+ 9:37am 0:00.06 unicorn worker[1] -p 5000 -c ./config/unicorn.rb me 61614 0.0 0.2 2615284 27772 s000 S+ 9:37am 0:00.06 unicorn worker[0] -p 5000 -c ./config/unicorn.rb me 61559 0.0 1.0 2615284 160988 s000 S+ 9:36am 0:09.87 unicorn master -p 5000 -c ./config/unicorn.rb So obviously I can manually kill each process, but how can I kill all at once? It doesn't seem like Foreman supports this.
To kill them all with a one-liner:

```
$ kill $(ps aux | grep -E 'redis|sidekiq|unicorn' | grep -v grep | awk '{print $2}')
```
Foreman
20,190,152
14
I know for a fact that Flask, in debug mode, will detect changes to .py source code files and will reload them when new requests come in. I used to see this in my app all the time. Change a little text in an @app.route decoration section in my views.py file, and I could see the changes in the browser upon refresh. But all of a sudden (can't remember what changed), this doesn't seem to work anymore. Q: Where am I going wrong? I am running on a OSX 10.9 system with a VENV setup using Python 2.7. I use foreman start in my project root to start it up. App structure is like this: [Project Root] +-[app] | +-__init__.py | +- views.py | +- ...some other files... +-[venv] +- config.py +- Procfile +- run.py The files look like this: # Procfile web: gunicorn --log-level=DEBUG run:app # config.py contains some app specific configuration information. # run.py from app import app if __name__ == "__main__": app.run(debug = True, port = 5000) # __init__.py from flask import Flask from flask.ext.login import LoginManager from flask.ext.sqlalchemy import SQLAlchemy from flask.ext.mail import Mail import os app = Flask(__name__) app.config.from_object('config') db = SQLAlchemy(app) #mail sending mail = Mail(app) lm = LoginManager() lm.init_app(app) lm.session_protection = "strong" from app import views, models # app/views.py @app.route('/start-scep') def start_scep(): startMessage = '''\ <html> <header> <style> body { margin:40px 40px;font-family:Helvetica;} h1 { font-size:40px; } p { font-size:30px; } a { text-decoration:none; } </style> </header> <p>Some text</p> </body> </html>\ ''' response = make_response(startMessage) response.headers['Content-Type'] = "text/html" print response.headers return response
The issue here, as stated in other answers, is that it looks like you moved from python run.py to foreman start, or you changed your Procfile from

```
# Procfile
web: python run.py
```

to

```
# Procfile
web: gunicorn --log-level=DEBUG run:app
```

When you run foreman start, it simply runs the commands that you've specified in the Procfile. (I'm going to guess you're working with Heroku, but even if not, this is nice because it will mimic what's going to run on your server/Heroku dyno/whatever.) So now, when you run gunicorn --log-level=DEBUG run:app (via foreman start), you are running your application with gunicorn rather than the built-in webserver that comes with Flask.

The run:app argument tells gunicorn to look in run.py for a Flask instance named app, import it, and run it. This is where it gets fun: since run.py is being imported, __name__ == '__main__' is False (see more on that here), and so app.run(debug = True, port = 5000) is never called. This is what you want (at least in a setting that's available publicly) because the webserver that's built into Flask, which is used when app.run() is called, has some pretty serious security vulnerabilities.

The --log-level=DEBUG may also be a bit deceiving since it uses the word "DEBUG", but it's only telling gunicorn which logging statements to print and which to ignore (check out the Python docs on logging).

The solution is to run python run.py when running the app locally and working/debugging on it, and only run foreman start when you want to mimic a production environment. Also, since gunicorn only needs to import the app object, you could remove some ambiguity and change your Procfile to

```
# Procfile
web: gunicorn --log-level=DEBUG app:app
```

You could also look into Flask-Script, which has a built-in command python manage.py runserver that runs the built-in Flask webserver in debug mode.
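As a sketch of that Flask-Script suggestion (assuming the flask_script package is installed and the app object from the question's app/__init__.py):

```python
# manage.py -- minimal Flask-Script setup (sketch)
from flask_script import Manager

from app import app

manager = Manager(app)

if __name__ == '__main__':
    # `python manage.py runserver` serves the app with Flask's built-in
    # development server, with debug/reload options available as flags.
    manager.run()
```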
Foreman
23,400,599
14
I am having trouble getting my dynos to run multiple delayed job worker processes. My Procfile looks like this: worker: bundle exec script/delayed_job -n 3 start and my delayed_job script is the default provided by the gem: #!/usr/bin/env ruby require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment')) require 'delayed/command' Delayed::Command.new(ARGV).daemonize When I try to run this either locally or on a Heroku dyno it exits silently and I can't tell what is going on. foreman start 16:09:09 worker.1 | started with pid 75417 16:09:15 worker.1 | exited with code 0 16:09:15 system | sending SIGTERM to all processes SIGTERM received Any help with either how to debug the issue or suggestions about other ways to go about running multiple workers on a single dyno it would be greatly appreciated.
You can use foreman to start multiple processes on the same dyno. First, add foreman to your Gemfile. Then add a worker line to your Procfile:

```
worker: bundle exec foreman start -f Procfile.workers
```

Create a new file called Procfile.workers which contains:

```
dj_worker: bundle exec rake jobs:work
dj_worker: bundle exec rake jobs:work
dj_worker: bundle exec rake jobs:work
```

That will start 3 delayed_job workers on your worker dyno.
Foreman
24,792,399
12
binding.pry does not work (console input is not available) if I start the server with the bin/dev command. It only works with bin/rails s. I understand it has something to do with foreman and Procfile.dev, but I don't know how. Is this a bug, or is it supposed to be like this?
With bin/dev, the Procfile.dev file is run with foreman. The pry issue is caused by the CSS and JS watchers: these just listen to changes in your CSS and JS files. What you can do is remove the web: unset PORT && bin/rails server command from your Procfile.dev, so it will only have the CSS and JS watchers and look like this: js: yarn build --watch css: yarn build:css --watch Now you'll have to open two terminals, one with bin/rails s and the other with foreman start -f Procfile.dev. This way your pry works in the server terminal as normal and the watchers are watching as normal.
Foreman
72,532,475
12
I want the foreman gem to use the PORT value provided in my development env file instead of using its own values. My file setup is shown below: A bash script to start foreman: foreman start -e development.env The development.env file content: PORT=3000 The Procfile content web: bundle exec rails server thin -p $PORT -e $RAILS_ENV $1 The dev server ends up starting on port 5000. I know I can start foreman with --p 3000 to force it to use that port. But that defeats the purpose of the env file. Any suggestions?
I know this is an old post but it took me a while to figure out, so might as well add a note here. Foreman increments the PORT based on where you define the service in the Procfile. Say our PORT environment variable is set to 3000. In our first Procfile example Puma will run on PORT 3000: web: bundle exec puma -q -p $PORT worker: bundle exec rake jobs:work But in our second Procfile it will run on PORT 3100, as the PORT variable is used on the second line. worker: bundle exec rake jobs:work web: bundle exec puma -q -p $PORT Not sure why; I guess to prevent different processes from trying to take the same PORT.
Foreman
9,804,184
11
I have the following Rake task: namespace :foreman do task :dev do `foreman start -f Procfile.dev` end end desc "Run Foreman using Procfile.dev" task :foreman => 'foreman:dev' The foreman command works fine from the shell, however when I run rake foreman I get the following error: /Users/me/.gem/ruby/2.0.0/gems/bundler-1.5.2/lib/bundler/rubygems_integration.rb:240:in `block in replace_gem': foreman is not part of the bundle. Add it to Gemfile. (Gem::LoadError) from /Users/me/.gem/ruby/2.0.0/bin/foreman:22:in `<main>' Foreman specifically states: Ruby users should take care not to install foreman in their project's Gemfile So how can I get this task to run?
If you must make it work via rake, try changing the shell-out via backtick to use a hard-coded path to the system-wide foreman binary `/global/path/to/foreman start -f Procfile.dev` You just need to use 'which' or 'locate' or a similar tool to determine the path that works outside your bundler context. If you are using rbenv, then this might be sufficient : $ rbenv which rake /home/name/.rbenv/versions/1.9.3-p448/bin/rake I hope that helps you move forward.
Foreman
27,189,450
11
Is there a way to download and install heroku toolbelt components individually, or at least without the bundled git? Heroku Toolbelt comes with git bundled in. Last time I downloaded it and installed it, it overwrote my existing git installation. Heroku Toolbelt bundles an older version of git and I require at least 1.7.10. Is there a way to just install heroku and foreman? This seems a little weird that there isn't such an option considering most heroku users would be developer likely to have git already.
It's just Foreman, Git, and the Heroku CLI client. If you already have Git and Foreman, you can just install the CLI from the command line, wget -qO- https://toolbelt.heroku.com/install.sh | sh The Windows installer offers the same options.
Foreman
12,322,473
10
We are trying to install a couple of Python packages without internet access. For example: python-keystoneclient. For that we have the packages downloaded from https://pypi.python.org/pypi/python-keystoneclient/1.7.1 and kept on the server. However, while installing the tar.gz and .whl packages, the installation looks for the dependent packages to be installed first. Since there is no internet connection on the server, it fails. For example, python-keystoneclient has the following dependent packages: stevedore (>=1.5.0) six (>=1.9.0) requests (>=2.5.2) PrettyTable (<0.8,>=0.7) oslo.utils (>=2.0.0) oslo.serialization (>=1.4.0) oslo.i18n (>=1.5.0) oslo.config (>=2.3.0) netaddr (!=0.7.16,>=0.7.12) debtcollector (>=0.3.0) iso8601 (>=0.1.9) Babel (>=1.3) argparse pbr (<2.0,>=1.6) When I try to install packages one by one from the above list, each one again looks for nested dependencies. Is there any way to list ALL the dependent packages needed to install a Python module like python-keystoneclient?
This is how I handle this case: On the machine where I have access to Internet: mkdir keystone-deps pip download python-keystoneclient -d "/home/aviuser/keystone-deps" tar cvfz keystone-deps.tgz keystone-deps Then move the tar file to the destination machine that does not have Internet access and perform the following: tar xvfz keystone-deps.tgz cd keystone-deps pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index You may need to add --no-deps to the command as follows: pip install python_keystoneclient-2.3.1-py2.py3-none-any.whl -f ./ --no-index --no-deps
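The same approach generalizes to a requirements file, and it also answers the "list ALL the dependent packages" part of the question, since pip download resolves the full dependency tree for you. A rough sketch (directory and file names are just placeholders):

# on the machine with internet access
pip download -r requirements.txt -d ./deps

# on the offline server, after copying ./deps over
pip install --no-index --find-links=./deps -r requirements.txt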
OpenStack
36,725,843
98
Are there any differences between Docker images and Virtual Machine images? Apart from the image formats, I couldn't find any info on this anywhere. Please comment on things like image size, instance creation time, capture time, etc. Thanks!
These are some differences between a Docker and a VM image which I could list out: 1. Snapshot process is faster in Docker than VMs We generally start with a base image, and then make our changes, and commit those changes using docker, and it creates an image. This image contains only the differences from the base. When we want to run our image, we also need the base, and it layers our image on top of the base using a layered file system. File system merges the different layers together and we get what we want, and we just need to run it. Since docker typically builds on top of ready-made images from a registry, we rarely have to "snapshot" the whole OS ourselves. This ability of Docker to snapshot the OS into a common image also makes it easy to deploy on other docker hosts. 2. Startup time is less for Docker than VMs A virtual machine usually takes minutes to start, but containers take seconds, and sometimes even less than a second. 3. Docker images have more portability Docker images are composed of layers. When we pull or transfer an image, only the layers we don't already have in cache are retrieved. That means that if we use multiple images based on the same base Operating System, the base layer is created or retrieved only once. VM images don't have this flexibility. 4. Docker provides versioning of images We can use the docker commit command. We can specify two flags: -m and -a. The -m flag allows us to specify a commit message, much like we would with a commit on a version control system: $ sudo docker commit -m "Added json gem" -a "Kate Smith" 0b2616b0e5a8 ouruser/sinatra:v2 4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c 5. Docker images do not have states In Docker terminology, a read-only Layer is called an image. An image never changes. Since Docker uses a Union File System, the processes think the whole file system is mounted read-write. But all the changes go to the top-most writeable layer, and underneath, the original file in the read-only image is unchanged. Since images don't change, images do not have state. 6. VMs are hardware-centric and docker containers are application-centric Let's say we have a container image that is 1GB in size. If we wanted to use a full VM, we would need 1GB times the number of VMs we want. In docker containers we can share the bulk of the 1GB, and if we have 1000 containers we still might only have a little over 1GB of space for the containers' OS, assuming they are all running the same OS image. 7. Supported image formats Docker images: bare. The image does not have a container or metadata envelope. ovf. The OVF container format. aki. An Amazon kernel image. ari. An Amazon ramdisk image. ami. An Amazon machine image. VM images: raw. An unstructured disk image format; if you have a file without an extension it is possibly a raw format vhd. The VHD disk format, a common disk format used by virtual machine monitors from VMware, Xen, Microsoft, VirtualBox, and others vmdk. Common disk format supported by many common virtual machine monitors vdi. Supported by VirtualBox virtual machine monitor and the QEMU emulator iso. An archive format for the data contents of an optical disc, such as CD-ROM. qcow2. Supported by the QEMU emulator that can expand dynamically and supports Copy on Write aki. An Amazon kernel image. ari. An Amazon ramdisk image. ami. An Amazon machine image.
OpenStack
29,096,967
28
I'm really trying to understand the under the hood of keystone regarding the relationships among endpoints, regions, tenants, services, users and roles. I've tried to find the related documents but sadly, failed. Could anybody give any pointers or explanations?
Keystone is the identity management service for OpenStack. Essentially its role is to grant tokens to users, be they people, services, or anything at all. If you make an API query anywhere in OpenStack, Keystone's API is how it is determined whether you are allowed to make that API query. Let's work our way up from the ground. Users. Users in Keystone today are generally people. There isn't enough fine-grained ACL support at this moment to really call many of the users in OpenStack a 'service' account in a traditional sense. But there is a service account that is used as a backhaul connection to the Keystone API as part of the OpenStack infrastructure itself. We'll avoid delving into that anomalous user. When a user authenticates to Keystone (you hit up the OS_AUTH_URL to talk to keystone, usually port 5000 of the keystone API box), the user says, "I am user X, I have password Y, and I belong to tenant Z". X can be a username or user id (the unique uuid of the user), Y is a password (but you can authenticate with a token as well), and Z is a tenant name or tenant id (the unique uuid of the tenant). In past Keystone APIs you didn't NEED to specify a tenant name, but your token wouldn't be very useful if you didn't, as the token wouldn't be associated with your tenant and you would then be denied any ACLs on that tenant. So... a user is a fairly obvious thing. A password is a fairly obvious thing. But what's a tenant? Well, a tenant is also known as a project. In fact, there have been repeated attempts to make the name be either tenant or project, but as a result of an inability to stick to just one term they both mean the same thing. As far as the API is concerned a project IS a tenant. So if you log into Horizon you will see a drop-down for your projects. Each project corresponds to a tenant id. Your tokens are associated with a specific tenant id as well. So you may need several tokens for a user if you intend to work on several tenants the user is attached to. Now, say you add a user to the tenant id of admin. Does that user get admin privileges? The answer is no. That's where roles come into play. While the user in the admin tenant may have access to admin virtual machines and quotas for spinning up virtual machines, that user wouldn't be able to do things like query keystone for a user list. But if you add an admin role to that user, they will be endowed with the ACL rights to act as an admin in the keystone API, and other APIs. So think of a tenant as a sort of resource group, and roles as an ACL set. Regions are more like ways to geographically group physical resources in the openstack infrastructure environment. Say you have two segmented data centers. You might put one in region A of your openstack environment and another in region B. Regions, in terms of their usefulness, are quickly evolving, especially with the introduction of cells and domains in more recent openstack releases. You probably don't need to be a master of this knowledge unless you intend to be architecting large clouds. Keystone provides one last useful thing: the catalog. The keystone catalog is kind of like the phone book for the openstack APIs. Whenever you use a command line client, like when you might call nova list to list your instances, nova first authenticates to keystone and gets you a token to use the API, but it also immediately asks the keystone catalog for a list of API endpoints. For keystone, cinder, nova, glance, swift... etc.
nova will really only use the nova-api endpoint, though depending on your query you may use the keystone administrative API endpoint.... we'll get back to that. But essentially the catalog is a canonical source of information for where APIs are in the world. That way you only ever need to tell a client where the public API endpoint of keystone is, and it can figure out the rest from the catalog. Now, I've made reference to the public API, and the administrative API for keystone. Yep keystone has two APIs... sort of. It runs an API on port 5000 and another one up in the 32000 range. The 5000 is the public port. This is where you do things like find the catalog, and ask for a token so you can talk to other APIs. It's very simple, and somewhat hardened. The administrative API would be used for things like changing a users password, or adding a new role to a user. Pretty straight forward?
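To make the user X / password Y / tenant Z trio concrete, here is a rough sketch using the python-keystoneclient v2.0 API of that era (the host name and credentials are placeholders, and the exact client calls may differ between keystoneclient versions):

from keystoneclient.v2_0 import client

keystone = client.Client(username='userX',
                         password='passwordY',
                         tenant_name='tenantZ',
                         auth_url='http://keystone-host:5000/v2.0')

# the scoped token that Keystone hands back, tied to tenantZ
print(keystone.auth_token)

# the catalog: the "phone book" of service endpoints described above
print(keystone.service_catalog.get_endpoints())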
OpenStack
19,004,503
20
I had to install OpenStack using the devstack infrastructure for experiments with Open vSwitch, and found this in the logs: /usr/lib/python2.7/site-packages/setuptools/dist.py:298: UserWarning: The version specified ('2014.2.2.dev5.gb329598') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details. I googled and found PEP440, but I wonder how serious this warning is.
Each Python package can specify its own version. Among other things, PEP440 says that a version specification should be stored in the __version__ attribute of the module, that it should be a string, and that it should consist of a major version number, minor version number and build number separated by dots (e.g. '2.7.8'), give or take a couple of other optional variations. In one of the packages you are installing, the developers appear to have broken these recommendations by using the suffix '.gb329598'. The warning says that this may confuse certain package managers (setuptools and friends) in some circumstances. It seems PEP440 does allow arbitrary "local version labels" to be appended to a version specifier, but these must be affixed with a '+', not a '.'.
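For illustration, roughly what compliant version strings look like next to the offending one:

# PEP 440-style version strings
__version__ = "2.7.8"                    # plain release segment
__version__ = "2014.2.2.dev5+gb329598"   # dev release plus a '+'-prefixed local version label

# the warning in the question was triggered by "2014.2.2.dev5.gb329598",
# where the local label is appended with '.' instead of '+'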
OpenStack
27,493,792
20
I'm having a problem with Python generators while working with the Openstack Swift client library. The problem at hand is that I am trying to retrieve a large string of data from a specific url (about 7MB), chunk the string into smaller bits, and send a generator class back, with each iteration holding a chunked bit of the string. in the test suite, this is just a string that's sent to a monkeypatched class of the swift client for processing. The code in the monkeypatched class looks like this: def monkeypatch_class(name, bases, namespace): '''Guido's monkeypatch metaclass.''' assert len(bases) == 1, "Exactly one base class required" base = bases[0] for name, value in namespace.iteritems(): if name != "__metaclass__": setattr(base, name, value) return base And in the test suite: from swiftclient import client import StringIO import utils class Connection(client.Connection): __metaclass__ = monkeypatch_class def get_object(self, path, obj, resp_chunk_size=None, ...): contents = None headers = {} # retrieve content from path and store it in 'contents' ... if resp_chunk_size is not None: # stream the string into chunks def _object_body(): stream = StringIO.StringIO(contents) buf = stream.read(resp_chunk_size) while buf: yield buf buf = stream.read(resp_chunk_size) contents = _object_body() return headers, contents After returning the generator object, it was called by a stream function in the storage class: class SwiftStorage(Storage): def get_content(self, path, chunk_size=None): path = self._init_path(path) try: _, obj = self._connection.get_object( self._container, path, resp_chunk_size=chunk_size) return obj except Exception: raise IOError("Could not get content: {}".format(path)) def stream_read(self, path): try: return self.get_content(path, chunk_size=self.buffer_size) except Exception: raise OSError( "Could not read content from stream: {}".format(path)) And finally, in my test suite: def test_stream(self): filename = self.gen_random_string() # test 7MB content = self.gen_random_string(7 * 1024 * 1024) self._storage.stream_write(filename, io) io.close() # test read / write data = '' for buf in self._storage.stream_read(filename): data += buf self.assertEqual(content, data, "stream read failed. output: {}".format(data)) The output ends up with this: ====================================================================== FAIL: test_stream (test_swift_storage.TestSwiftStorage) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/bacongobbler/git/github.com/bacongobbler/docker-registry/test/test_local_storage.py", line 46, in test_stream "stream read failed. output: {}".format(data)) AssertionError: stream read failed. output: <generator object _object_body at 0x2a6bd20> I tried isolating this with a simple python script that follows the same flow as the code above, which passed without issues: def gen_num(): def _object_body(): for i in range(10000000): yield i return _object_body() def get_num(): return gen_num() def stream_read(): return get_num() def main(): num = 0 for i in stream_read(): num += i print num if __name__ == '__main__': main() Any help with this issue is greatly appreciated :)
In your get_object method, you're assigning the return value of _object_body() to the contents variable. However, that variable is also the one that holds your actual data, and it's used early on in _object_body. The problem is that _object_body is a generator function (it uses yield). Therefore, when you call it, it produces a generator object, but the code of the function doesn't start running until you iterate over that generator. Which means that when the function's code actually starts running (the for loop in _test_stream), it's long after you've reassigned contents = _object_body(). Your stream = StringIO(contents) therefore creates a StringIO object containing the generator object (hence your error message), not the data. Here's a minimal reproduction case that illustrates the problem: def foo(): contents = "Hello!" def bar(): print contents yield 1 # Only create the generator. This line runs none of the code in bar. contents = bar() print "About to start running..." for i in contents: # Now we run the code in bar, but contents is now bound to # the generator object. So this doesn't print "Hello!" pass
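One way to fix it, sketched on the question's code, is to bind the data at the moment the generator object is created, for example by passing it in as an argument (argument values are evaluated when the generator is constructed, so the later reassignment of contents no longer matters):

if resp_chunk_size is not None:
    def _object_body(data):
        stream = StringIO.StringIO(data)
        buf = stream.read(resp_chunk_size)
        while buf:
            yield buf
            buf = stream.read(resp_chunk_size)
    # 'data' is bound to the string right here, not when iteration starts
    contents = _object_body(contents)
return headers, contents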
OpenStack
20,429,971
19
I am searching for options that enable dynamic cloud-based NVIDIA GPU virtualization similar to the way AWS assigns GPUs for Cluster GPU Instances. My project is working on standing up an internal cloud. One requirement is the ability to allocate GPUs to virtual-machines/instances for server-side CUDA processing. USC appears to be working on OpenStack enhancements to support this but it isn't ready yet. This would be exactly what I am looking for if it were fully functional in OpenStack. NVIDIA VGX seems to only support allocation of GPUs to USMs, which is strictly remote-desktop GPU virtualization. If I am wrong, and VGX does enable server-side CUDA computing from virtual-machines/instances then please let me know.
"dynamic cloud-based NVIDIA GPU virtualization similar to the way AWS assigns GPUs for Cluster GPU Instances." AWS does not really allocate GPUs dynamically: Each GPU Cluster Compute has 2 fixed GPUs. All other servers (including the regular Cluster Compute) don't have any GPUs. I.e. they don't have an API where you can say "GPU or not", it's fixed to the box type, which uses fixed hardware. The pass-thru mode on Xen was made specifically for your use case: Passing hardware on thru from the Host to the Guest. It's not 'dynamic' by default, but you could write some code that chooses one of the guests to get each card on the host.
OpenStack
14,505,941
15
Does devstack completely install openstack? I read somewhere that devStack is not and has never been intended to be a general OpenStack installer. So what does devstack actually install? Is there any other scripted method available to completely install openstack(grizzly release) or I need to follow the manual installation steps given on openstack website?
devstack does completely install openstack from git. For lesser values of completely, anyways. devstack is the version of openstack used in jenkins gate testing by developers committing code to the openstack project. devstack, as the name suggests, is specifically for developing for openstack. As such its existence is ephemeral. In short, after running stack.sh the resulting (probably) functioning openstack is set up... but upon reboot it will not come back up. There are no upstart or systemd or init.d scripts for restarting services. There is no high availability, no backups, no configuration management. And following the latest git releases in the development branch of openstack can be a great way to discover just how unstable openstack is before a feature freeze. There are several vagrant recipes in the world for deploying openstack, and openstack-puppet is a puppet recipe for deploying openstack. chef maintains an openstack recipe as well. Grizzly is a bit old now. Havana is the current stable release. https://github.com/stackforge/puppet-openstack http://docs.opscode.com/openstack.html http://cloudarchitectmusings.com/2013/12/01/deploy-openstack-havana-on-your-laptop-using-vagrant-and-chef/ And ubuntu even maintains a system called maas and juju for deploying openstack super quickly on their OS. https://help.ubuntu.com/community/UbuntuCloudInfrastructure http://www.youtube.com/watch?v=mspwQfoYQks So, lots of ways to install openstack. However, most folks pushing a production cloud use some form of configuration management system. That way they can deploy compute nodes automatically and recover systems quickly. Also check out openstack on openstack: https://wiki.openstack.org/wiki/TripleO
OpenStack
21,729,860
14
My question is similar to this git hub post: https://github.com/hashicorp/terraform/issues/745 It is also related to another stack exchange post of mine: Terraform stalls while trying to get IP addresses of multiple instances? I am trying to bootstrap several servers and there are several commands I need to run on my instances that require the IP addresses of all the other instances. However I cannot access the variables that hold the IP addresses of my newly created instances until they are created. So when I try to run a provisioner "remote-exec" block like this: provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "echo ${openstack_compute_instance_v2.consul.0.network.0.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.1.network.1.fixed_ip_v4}", "echo ${openstack_compute_instance_v2.consul.2.network.2.fixed_ip_v4}" ] } Nothing happens because all the instances are waiting for all the other instances to finish being created and so nothing is created in the first place. So I need a way for my resources to be created and then run my provisioner "remote-exec" block commands after they are created and terraform can access the IP addresses of all my instances.
The solution is to create a resource "null_resource" "nameYouWant" { } and then run your commands inside that. They will run after the initial resources are created: resource "aws_instance" "consul" { count = 3 ami = "ami-ce5a9fa3" instance_type = "t2.micro" key_name = "ansible_aws" tags { Name = "consul" } } resource "null_resource" "configure-consul-ips" { count = 3 connection { user = "ubuntu" private_key="${file("/home/ubuntu/.ssh/id_rsa")}" agent = true timeout = "3m" } provisioner "remote-exec" { inline = [ "sudo apt-get update", "sudo apt-get install -y curl", "sudo echo '${join("\n", aws_instance.consul.*.private_ip)}' > /home/ubuntu/test.txt" ] } } Also see the answer here: Terraform stalls while trying to get IP addresses of multiple instances? Thank you so much @ydaetskcor for the answer
OpenStack
37,865,979
12
I have no experience in openstack and would appreciate anyone who can help and guide me with this issue. I'm installing openstack in virtual environment (Ubuntu 12.04) and this came out: git clone git//git.openstack.org/openstack/requirements.git/opt/stack/reqiurements Cloning into '/opt/stack/requirements'... fatal:unable to connect to git.openstack.org: git.openstack.org[0: 192.237.223.224]: errno=Connection refused git.openstack.org[1: 2001:4800:7813:516:3bc3:d7f6:ff04:aacb]: errno=Network is unreachable
I had the same problem, the git protocol is blocked in my testing environment. The solution is to modify the sourcerc file in the devstack installation folder to use https instead of git. You have to look for that line and change it. This file is also known as the local.conf file. Default setting in sourcerc file: GIT_BASE=${GIT_BASE:-git://git.openstack.org} Modified setting that should bypass git restrictions: GIT_BASE=${GIT_BASE:-https://git.openstack.org} Simply add this modified line to the local/localrc section of your local.conf file in the DevStack directory and it should use the HTTPS protocol instead of the Git protocol! More info on the local.conf file here - http://devstack.org/configuration.html
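For reference, a minimal local.conf along those lines might look like this (assuming the stock DevStack layout of that release):

[[local|localrc]]
# use HTTPS instead of the git:// protocol, which is often blocked by firewalls
GIT_BASE=https://git.openstack.org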
OpenStack
20,390,267
11
I am looking into the python shade module in order to automate some tasks using our OpenStack installation. This page instructs: Create a configuration file to store your user name, password, project_name in ~/.config/openstack/clouds.yml. I had a close look, but I couldn't find any information on how to provide credentials in a different way; for example, as parameters to some objects that I could create within Python code. Long story short: is that even possible? Or does this requirement immediately force me "off shade" and onto the OpenStack Python SDK instead?
I am not a Python expert, but after some searching into how other OpenStack client modules do it, maybe the following could work (example code from your link, just a bit enhanced): from shade import * auth_data = { # URL to the Keystone API endpoint. 'auth_url': 'url', # User credentials. 'user_domain_name': ... } to later do this: cloud = openstack_cloud(cloud='your-cloud', **auth_data)
OpenStack
42,222,387
11
I am dealing with creating an AWS API Gateway. I am trying to create a CloudWatch Log group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the Rest API creation. My issue is in converting restApi.id, which is of type pulumi.Output, to string. I have tried these 2 versions which are proposed in their PR#2496 const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`); const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}` here is the code where it is used const cloudWatchLogGroup = new aws.cloudwatch.LogGroup( `API-Gateway-Execution-Logs_${restApiId}/${stageName}`, {}, ); stageName is just a string. I have also tried to apply again like const restApiIdStrign = restApiId.apply((v) => v); I always got this error from pulumi up aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported. Please help me convert Output to string
@Cameron answered the naming question, I want to answer your question in the title. It's not possible to convert an Output<string> to string, or any Output<T> to T. Output<T> is a container for a future value T which may not be resolved even after the program execution is over. Maybe your restApiId is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet. Output<T> is like a Promise<T> which will eventually be resolved, potentially after some resources are created in the cloud. Therefore, the only operations with Output<T> are: Convert it to another Output<U> with apply(f), where f: T -> U Assign it to an Input<T> to pass it to another resource constructor Export it from the stack Any value manipulation has to happen within an apply call.
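As a sketch of the second bullet applied to the log group from the question (assuming the AWS classic provider, where LogGroup accepts an Input<string> for its name property): keep the first constructor argument, Pulumi's logical resource name, a plain string, and feed the interpolated Output into the name property instead.

const logGroupName = pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}`;

const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    "api-gateway-execution-logs",   // logical name: must be a plain string
    { name: logGroupName },         // Output<string> is fine here, since name is an Input<string>
);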
Pulumi
62,561,660
18
I don't see any options in the documentation on how to delete imported resources from my stack. If I try to remove the resource's reference from my code I get the following error when running pulumi up: error: Preview failed: refusing to delete protected resource 'urn:pulumi:dev::my-cloud-infrastructure::aws:iam/instanceProfile:InstanceProfile::EC2CodeDeploy'
As answered in the Pulumi Slack community channel, one can use the command: pulumi state delete <urn> This will remove the reference from your state file but not from aws. Also, if the resource is protected you'll first have to unprotect it or run the above command with the flag --force.
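A rough sketch of the full sequence, using the URN from the error message in the question as the example:

# find the URN of the resource you want to drop
pulumi stack --show-urns

# clear the protect flag first (or pass --force to the delete)
pulumi state unprotect 'urn:pulumi:dev::my-cloud-infrastructure::aws:iam/instanceProfile:InstanceProfile::EC2CodeDeploy'

# remove it from the state file only; the real cloud resource is left untouched
pulumi state delete 'urn:pulumi:dev::my-cloud-infrastructure::aws:iam/instanceProfile:InstanceProfile::EC2CodeDeploy'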
Pulumi
66,162,196
16
I'm building a macOS app via Xcode. Every time I build, I get the log output: Metal API Validation Enabled To my knowledge my app is not using any Metal features. I'm not using hardware-accelerated 3D graphics or shaders or video game features or anything like that. Why is Xcode printing Metal API log output? Is Metal being used in my app? Can I or should I disable it? How can I disable this "Metal API Validation Enabled" log message?
Toggle Metal API Validation via your Xcode Scheme: Scheme > Edit Scheme... > Run > Diagnostics > Metal API Validation. It's a checkbox, so the possible options are Enabled or Disabled. Disabling sets the key enableGPUValidationMode = 1 in your .xcscheme file. After disabling, Xcode no longer logs the "Metal API Validation Enabled" log message. Note: In Xcode 11 and below, the option appears in the "Options" tab of the Scheme Editor (instead of the "Diagnostics" tab).
Metal³
60,645,401
40
Task I would like to capture a real-world texture and apply it to a reconstructed mesh produced with the help of a LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. A texture must be made from a fixed Point-of-View, for example, from the center of a room. However, it would be an ideal solution if we could apply environmentTexturing data, collected as a cube-map texture in a scene. Look at 3D Scanner App. It's a reference app allowing us to export a model with its texture. I need to capture a texture in one iteration. I do not need to update it in realtime. I realize that changing the PoV leads to an incorrect perception of the texture, in other words, texture distortion. Also I realize that there's dynamic tessellation in RealityKit and there's automatic texture mipmapping (the texture's resolution depends on the distance it was captured from). import RealityKit import ARKit import Metal import ModelIO class ViewController: UIViewController, ARSessionDelegate { @IBOutlet var arView: ARView! override func viewDidLoad() { super.viewDidLoad() arView.session.delegate = self arView.debugOptions.insert(.showSceneUnderstanding) let config = ARWorldTrackingConfiguration() config.sceneReconstruction = .mesh config.environmentTexturing = .automatic arView.session.run(config) } } Question How to capture and apply a real-world texture to a reconstructed 3D mesh?
Object Reconstruction On 10 October 2023, Apple released the iOS Reality Composer 1.6 app, which is capable of capturing a real-world model's mesh with texture in realtime using the LiDAR scanning process. But at the moment there's still no native programmatic API for that (we are all looking forward to it). Also, there's a methodology that allows developers to create textured models from a series of shots. Photogrammetry The Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. At the output we get a USDZ model with a UV-mapped hi-res texture. To implement the Object Capture API you need macOS 12+ and Xcode 13+. To create a USDZ model from a series of shots, submit all taken images to RealityKit's PhotogrammetrySession. Here's a code snippet that sheds some light on this process: import RealityKit import Combine let pathToImages = URL(fileURLWithPath: "/path/to/my/images/") let url = URL(fileURLWithPath: "model.usdz") var request = PhotogrammetrySession.Request.modelFile(url: url, detail: .medium) var configuration = PhotogrammetrySession.Configuration() configuration.sampleOverlap = .normal configuration.sampleOrdering = .unordered configuration.featureSensitivity = .normal configuration.isObjectMaskingEnabled = false guard let session = try? PhotogrammetrySession(input: pathToImages, configuration: configuration) else { return 
} var subscriptions = Set<AnyCancellable>() session.output.receive(on: DispatchQueue.global()) .sink(receiveCompletion: { _ in // errors }, receiveValue: { _ in // output }) .store(in: &subscriptions) session.process(requests: [request]) You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
Metal³
63,793,918
32
I want to set a MTLTexture object as the environment map of a scene, as it seems to be possible according to the documentation. I can set the environment map to be a UIImage with the following code: let roomImage = UIImage(named: "room") scene.lightingEnvironment.contents = roomImage This works and I see the reflection of the image on my metallic objects. I tried converting the image to a MTLTexture and setting it as the environment map with the following code: let roomImage = UIImage(named: "room") let loader = MTKTextureLoader(device: MTLCreateSystemDefaultDevice()!) let envMap = try? loader.newTexture(cgImage: (roomImage?.cgImage)!, options: nil) scene.lightingEnvironment.contents = envMap However this does not work and I end up with a blank environment map with no reflection on my objects. Also, instead of setting the options as nil, I tried setting the MTKTextureLoader.Option.textureUsage key with every possible value it can get, but that didn't work either. Edit: You can have a look at the example project in this repo and use it to reproduce this use case.
Lighting SCN Environment with an MTK texture Using Xcode 13.3.1 on macOS 12.3.1 for iOS 15.4 app. The trick is, the environment lighting requires a cube texture, not a flat image. Create 6 square images for MetalKit cube texture in Xcode Assets folder create Cube Texture Set place textures to their corresponding slots mirror images horizontally and vertically, if needed Paste the code: import ARKit import MetalKit class ViewController: UIViewController { @IBOutlet var sceneView: ARSCNView! override func viewDidLoad() { super.viewDidLoad() let scene = SCNScene() let imageName = "CubeTextureSet" let textureLoader = MTKTextureLoader(device: sceneView.device!) let environmentMap = try! textureLoader.newTexture(name: imageName, scaleFactor: 2, bundle: .main, options: nil) let daeScene = SCNScene(named: "art.scnassets/testCube.dae")! let model = daeScene.rootNode.childNode(withName: "polyCube", recursively: true)! scene.lightingEnvironment.contents = environmentMap scene.lightingEnvironment.intensity = 2.5 scene.background.contents = environmentMap sceneView.scene = scene sceneView.allowsCameraControl = true scene.rootNode.addChildNode(model) } } Apply metallic materials to models. Now MTL environment lighting is On. If you need a procedural skybox texture – use MDLSkyCubeTexture class. Also, this post may be useful for you.
Metal³
47,739,214
31
I'm creating a MTLTexture from CVImageBuffers (from camera and players) using CVMetalTextureCacheCreateTextureFromImage to get a CVMetalTexture and then CVMetalTextureGetTexture to get the MTLTexture. The problem I'm seeing is that when I later render the texture using Metal, I occasionally see video frames rendered out of order (visually it stutters back and forth in time), presumably because CoreVideo is modifying the underlying CVImageBuffer storage and the MTLTexture is just pointing there. Is there any way to make CoreVideo not touch that buffer and use another one from its pool until I release the MTLTexture object? My current workaround is blitting the texture using a MTLBlitCommandEncoder but since I just need to hold on to the texture for ~30 milliseconds that seems unnecessary.
I recently ran into this exact same issue. The problem is that the MTLTexture is not valid unless its owning CVMetalTextureRef is still alive. You must keep a reference to the CVMetalTextureRef the entire time you're using the MTLTexture (all the way until the end of the current rendering cycle).
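A minimal sketch of one way to do that, wrapping both objects so they share a lifetime (the pixel format and plane index here are assumptions for a single-plane BGRA buffer):

import CoreVideo
import Metal

final class CapturedTexture {
    private let cvTexture: CVMetalTexture   // kept alive so the MTLTexture below stays valid
    let texture: MTLTexture

    init?(pixelBuffer: CVPixelBuffer, textureCache: CVMetalTextureCache) {
        var cvTextureOut: CVMetalTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, nil, .bgra8Unorm, width, height, 0, &cvTextureOut)
        guard status == kCVReturnSuccess,
              let cvTexture = cvTextureOut,
              let metalTexture = CVMetalTextureGetTexture(cvTexture) else { return nil }
        self.cvTexture = cvTexture
        self.texture = metalTexture
    }
}

Hold an instance of this for as long as the texture is being rendered (for example, release it in the command buffer's completion handler), and the out-of-order frames go away without the extra blit.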
Metal³
43,550,769
21
I try to check out the new Samples from the new Metal API for iOS. When I download the code and open it in the Xcode 6 Beta I'm getting the following error message: QuartzCore/CAMetalLayer.h file not found Do I need to add some other files or am I missing something else? The Metal API should be available in OSX 10.9.3. Is there any need to upgrade to Yosemite 10.10 Beta to run these examples?
The reason behind this error is that Metal only works on devices with an A7 chip or later; the simulator will not work for this.
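If you also want to guard against missing Metal support at runtime (not just at compile time), a minimal check looks roughly like this:

import Metal

if MTLCreateSystemDefaultDevice() != nil {
    // Metal is available (A7 or newer GPU): set up CAMetalLayer, pipelines, etc.
} else {
    // no Metal support (older device or, at that time, the simulator): fall back
}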
Metal³
24,145,101
20
What is the most efficient way to capture frames from a MTKView? If possible, I would like to save a .mov file from the frames in realtime. Is it possible to render into an AVPlayer frame or something? It is currently drawing with this code (based on @warrenm PerformanceShaders project): func draw(in view: MTKView) { _ = inflightSemaphore.wait(timeout: DispatchTime.distantFuture) updateBuffers() let commandBuffer = commandQueue.makeCommandBuffer() commandBuffer.addCompletedHandler{ [weak self] commandBuffer in if let strongSelf = self { strongSelf.inflightSemaphore.signal() } } // Dispatch the current kernel to perform the selected image filter selectedKernel.encode(commandBuffer: commandBuffer, sourceTexture: kernelSourceTexture!, destinationTexture: kernelDestTexture!) if let renderPassDescriptor = view.currentRenderPassDescriptor, let currentDrawable = view.currentDrawable { let clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1) renderPassDescriptor.colorAttachments[0].clearColor = clearColor let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) renderEncoder.label = "Main pass" renderEncoder.pushDebugGroup("Draw textured square") renderEncoder.setFrontFacing(.counterClockwise) renderEncoder.setCullMode(.back) renderEncoder.setRenderPipelineState(pipelineState) renderEncoder.setVertexBuffer(vertexBuffer, offset: MBEVertexDataSize * bufferIndex, at: 0) renderEncoder.setVertexBuffer(uniformBuffer, offset: MBEUniformDataSize * bufferIndex , at: 1) renderEncoder.setFragmentTexture(kernelDestTexture, at: 0) renderEncoder.setFragmentSamplerState(sampler, at: 0) renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4) renderEncoder.popDebugGroup() renderEncoder.endEncoding() commandBuffer.present(currentDrawable) } bufferIndex = (bufferIndex + 1) % MBEMaxInflightBuffers commandBuffer.commit() }
Here's a small class that performs the essential functions of writing out a movie file that captures the contents of a Metal view: class MetalVideoRecorder { var isRecording = false var recordingStartTime = TimeInterval(0) private var assetWriter: AVAssetWriter private var assetWriterVideoInput: AVAssetWriterInput private var assetWriterPixelBufferInput: AVAssetWriterInputPixelBufferAdaptor init?(outputURL url: URL, size: CGSize) { do { assetWriter = try AVAssetWriter(outputURL: url, fileType: .m4v) } catch { return nil } let outputSettings: [String: Any] = [ AVVideoCodecKey : AVVideoCodecType.h264, AVVideoWidthKey : size.width, AVVideoHeightKey : size.height ] assetWriterVideoInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings) assetWriterVideoInput.expectsMediaDataInRealTime = true let sourcePixelBufferAttributes: [String: Any] = [ kCVPixelBufferPixelFormatTypeKey as String : kCVPixelFormatType_32BGRA, kCVPixelBufferWidthKey as String : size.width, kCVPixelBufferHeightKey as String : size.height ] assetWriterPixelBufferInput = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterVideoInput, sourcePixelBufferAttributes: sourcePixelBufferAttributes) assetWriter.add(assetWriterVideoInput) } func startRecording() { assetWriter.startWriting() assetWriter.startSession(atSourceTime: .zero) recordingStartTime = CACurrentMediaTime() isRecording = true } func endRecording(_ completionHandler: @escaping () -> ()) { isRecording = false assetWriterVideoInput.markAsFinished() assetWriter.finishWriting(completionHandler: completionHandler) } func writeFrame(forTexture texture: MTLTexture) { if !isRecording { return } while !assetWriterVideoInput.isReadyForMoreMediaData {} guard let pixelBufferPool = assetWriterPixelBufferInput.pixelBufferPool else { print("Pixel buffer asset writer input did not have a pixel buffer pool available; cannot retrieve frame") return } var maybePixelBuffer: CVPixelBuffer? = nil let status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &maybePixelBuffer) if status != kCVReturnSuccess { print("Could not get pixel buffer from asset writer input; dropping frame...") return } guard let pixelBuffer = maybePixelBuffer else { return } CVPixelBufferLockBaseAddress(pixelBuffer, []) let pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer)! // Use the bytes per row value from the pixel buffer since its stride may be rounded up to be 16-byte aligned let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer) let region = MTLRegionMake2D(0, 0, texture.width, texture.height) texture.getBytes(pixelBufferBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0) let frameTime = CACurrentMediaTime() - recordingStartTime let presentationTime = CMTimeMakeWithSeconds(frameTime, preferredTimescale: 240) assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: presentationTime) CVPixelBufferUnlockBaseAddress(pixelBuffer, []) } } After initializing one of these and calling startRecording(), you can add a scheduled handler to the command buffer containing your rendering commands and call writeFrame (after you end encoding, but before presenting the drawable or committing the buffer): let texture = currentDrawable.texture commandBuffer.addCompletedHandler { commandBuffer in self.recorder.writeFrame(forTexture: texture) } When you're done recording, just call endRecording, and the video file will be finalized and closed. Caveats: This class assumes the source texture to be of the default format, .bgra8Unorm. 
If it isn't, you'll get crashes or corruption. If necessary, convert the texture with a compute or fragment shader, or use Accelerate. This class also assumes that the texture is the same size as the video frame. If this isn't the case (if the drawable size changes, or your screen autorotates), the output will be corrupted and you may see crashes. Mitigate this by scaling or cropping the source texture as your application requires.
Metal³
43,838,089
20
I've been learning Metal for iOS / OSX, and I began by following a Ray Wenderlich tutorial. This tutorial works fine but it makes no mention of MTLVertexAttributeDescriptors. Now that I'm developing my own app, I'm getting weird glitches and I'm wondering if the fact that I don't use MTLVertexAttributeDescriptors may be related to the problem. What difference do they make? I've been able to make a variety of shaders with varying vertex structures and I never even knew about these things. I know you use them to describe the layout of vertex components for use in a shader. For example a shader might use this structure for vertices, and it would be set up in a vertex descriptor in the function below. typedef struct { float3 position [[attribute(T_VertexAttributePosition)]]; float2 texCoord [[attribute(T_VertexAttributeTexcoord)]]; } Vertex; class func buildMetalVertexDescriptor() -> MTLVertexDescriptor { let mtlVertexDescriptor = MTLVertexDescriptor() mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].format = MTLVertexFormat.float3 mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].offset = 0 mtlVertexDescriptor.attributes[T_VertexAttribute.position.rawValue].bufferIndex = T_BufferIndex.meshPositions.rawValue mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].format = MTLVertexFormat.float2 mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].offset = 0 mtlVertexDescriptor.attributes[T_VertexAttribute.texcoord.rawValue].bufferIndex = T_BufferIndex.meshGenerics.rawValue mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stride = 12 mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stepRate = 1 mtlVertexDescriptor.layouts[T_BufferIndex.meshPositions.rawValue].stepFunction = MTLVertexStepFunction.perVertex mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stride = 8 mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stepRate = 1 mtlVertexDescriptor.layouts[T_BufferIndex.meshGenerics.rawValue].stepFunction = MTLVertexStepFunction.perVertex return mtlVertexDescriptor } But even without the MTLVertexDescriptor setup, the shader can already access the vertex buffer and the position / texCoord components of vertices in the array. Just by setting the vertex buffer, the shader has access to all of the components. So what good does the descriptor do?
There are, of course, multiple ways of doing things. The vertex descriptor is only used for one of them. For example, a vertex function might be declared like this: vertex MyVertexOut vertex_func(device const float3 *positions [[buffer(0)]], device const float2 *texCoords [[buffer(1)]], uint vid [[vertex_id]]) { // use positions[vid] and texCoords[vid] to fill in and return a MyVertexOut structure } This dictates that the vertex attributes be supplied in separate buffers, each of a specific layout. You could also do: struct MyVertexIn { float3 position; float2 texCoord; }; vertex MyVertexOut vertex_func(device const MyVertexIn *vertexes [[buffer(0)]], uint vid [[vertex_id]]) { // use vertexes[vid].position and vertexes[vid].texCoord to fill in and return a MyVertexOut structure } This dictates that the vertex attributes be supplied in a single buffer of structs matching the layout of MyVertexIn. Neither of the above require or make use of the vertex descriptor. It's completely irrelevant. However, you can also do this: struct MyVertexIn { float3 position [[attribute(0)]]; float2 texCoord [[attribute(1)]]; }; vertex MyVertexOut vertex_func(MyVertexIn vertex [[stage_in]]) { // use vertex.position and vertex.texCoord to fill in and return a MyVertexOut structure } Note the use of the attribute(n) and stage_in attributes. This does not dictate how the vertex attributes are supplied. Rather, the vertex descriptor describes a mapping from one or more buffers to the vertex attributes. The mapping can also perform conversions and expansions. For example, the shader code above specifies that the position field is a float3 but the buffers may contain (and be described as containing) half3 values (or various other types) and Metal will do the conversion automatically. The same shader can be used with different vertex descriptors and, thus, different distribution of vertex attributes across buffers. That provides flexibility for different scenarios, some where the vertex attributes are separated out into different buffers (similar to the first example I gave) and others where they're interleaved in the same buffer (similar to the second example). Etc. If you don't need that flexibility and the extra level of abstraction, then you don't need to deal with vertex descriptors. They're there for those who do need them.
Metal³
47,044,663
20
I am trying to create a framework that works with the Metal API (iOS). I am pretty new to this platform and I would like to know how to build the framework to work with .metal files (I am building a static lib, not dynamic). Should they be a part of the .a file, or be resource files in the framework bundle? Or is there another way to do that? Thanks. Update: For those who tackle this - I ended up following warrenm's suggested option 1 - converting the .metal file into a string and calling newLibraryWithSource:options:error:. Although it is not the best in performance, it allowed me to ship only one framework file, without additional resources to import. That could be useful to whoever is creating a framework that uses Metal, ARKit, etc. with shader files.
There are many ways to provide Metal shaders with a static library, all with different tradeoffs. I'll try to enumerate them here. 1) Transform your .metal files into static strings that are baked into your static library. This is probably the worst option. The idea is that you preprocess your Metal shader code into strings which are included as string literals in your static library. You would then use the newLibraryWithSource:options:error: API (or its asynchronous sibling) to turn the source into an MTLLibrary and retrieve the functions. This requires you to devise a process for doing the .metal-to-string conversion, and you lose the benefit of shader pre-compilation, making the resulting application slower. 2) Ship .metal files alongside your static library and require library users to add them to their app target All things considered, this is a decent option, though it places more of a burden on your users and exposes your Metal shader source (if that's a concern). Code in your static library can use the "default library" (newDefaultLibrary), since the code will be compiled automatically by Xcode into the app's default.metallib, which is embedded in the app bundle as a resource. 3) Ship a .metallib file alongside your static library This is a good middle ground between ease-of-use, performance, and security (since it doesn't expose your shader source, only its IR). Basically, you can create a "Metal Library" target in your project, into which you put your shader code. This will produce a .metallib file, which you can ship along with your static library and have your user embed as a resource in their app target. Your static library can load the .metallib at runtime with the newLibraryWithData:error: or newLibraryWithURL:error: API. Since your shaders will be pre-compiled, creating libraries will be faster, and you'll keep the benefit of compile-time diagnostics.
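As an illustration of option 3, loading the shipped .metallib at runtime from the static library's code might look roughly like this (the bundle and file names are placeholders):

import Metal

enum ShaderLibraryError: Error { case missingMetallib }

func makeShaderLibrary(device: MTLDevice) throws -> MTLLibrary {
    // the .metallib must have been embedded as a resource of the app target by your library's user
    guard let url = Bundle.main.url(forResource: "MyShaders", withExtension: "metallib") else {
        throw ShaderLibraryError.missingMetallib
    }
    return try device.makeLibrary(URL: url)   // Swift spelling of newLibraryWithURL:error:
}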
Metal³
46,742,403
19
I have an image that I generate programmatically and I want to send this image as a texture to a compute shader. The way I generate this image is that I calculate each of the RGBA components as UInt8 values, and combine them into a UInt32 and store it in the buffer of the image. I do this with the following piece of code: guard let cgContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: RGBA32.bitmapInfo) else { print("Unable to create CGContext") return } guard let buffer = cgContext.data else { print("Unable to create textures") return } let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height) let heightFloat = Float(height) let widthFloat = Float(width) for i in 0 ..< height { let latitude = Float(i + 1) / heightFloat for j in 0 ..< width { let longitude = Float(j + 1) / widthFloat let x = UInt8(((sin(longitude * Float.pi * 2) * cos(latitude * Float.pi) + 1) / 2) * 255) let y = UInt8(((sin(longitude * Float.pi * 2) * sin(latitude * Float.pi) + 1) / 2) * 255) let z = UInt8(((cos(latitude * Float.pi) + 1) / 2) * 255) let offset = width * i + j pixelBuffer[offset] = RGBA32(red: x, green: y, blue: z, alpha: 255) } } let coordinateConversionImage = cgContext.makeImage() where RGBA32 is a little struct that does the shifting and creating the UInt32 value. This image turns out fine as I can convert it to UIImage and save it to my photos library. The problem arises when I try to send this image as a texture to a compute shader. Below is my shader code: kernel void updateEnvironmentMap(texture2d<uint, access::read> currentFrameTexture [[texture(0)]], texture2d<uint, access::read> coordinateConversionTexture [[texture(1)]], texture2d<uint, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const uint4 pixel = {255, 127, 63, 255}; environmentMap.write(pixel, gid); } The problem with this code is that the type of my textures is uint, which is 32-bits, and I want to generate 32-bit pixels the same way I do on the CPU, by appending 4 8-bit values. However, I can't seem to do that on Metal as there is no byte type that I can just append together and make up a uint32. So, my question is, what is the correct way to handle 2D textures and set 32-bit pixels on a Metal compute shader? Bonus question: Also, I've seen example shader codes with texture2d<float, access::read> as the input texture type. I'm assuming it represents a value between 0.0 and 1.0 but what advantage that does that have over an unsigned int with values between 0 and 255? Edit: To clarify, the output texture of the shader, environmentMap, has the exact same properties (width, height, pixelFormat, etc.) as the input textures. Why I think this is counter intuitive is that we are setting a uint4 as a pixel, which means it's composed of 4 32-bit values, whereas each pixel should be 32-bits. With this current code, {255, 127, 63, 255} has the exact same result as {2550, 127, 63, 255}, meaning the values somehow get clamped between 0-255 before being written to the output texture. But this is extremely counter-intuitive.
There's a bit more magic at play than you seem to be familiar with, so I'll try to elucidate. First of all, by design, there is a loose connection between the storage format of textures in Metal and the type you get when you read/sample. You can have a texture in .bgra8Unorm format that, when sampled through a texture bound as texture2d<float, access::sample> will give you a float4 with its components in RGBA order. The conversion from those packed bytes to the float vector with swizzled components follows well-documented conversion rules as specified in the Metal Shading Language Specification. It is also the case that, when writing to a texture whose storage is (for example) 8 bits per component, values will be clamped to fit in the underlying storage format. This is further affected by whether or not the texture is a norm type: if the format contains norm, the values are interpreted as if they specified a value between 0 and 1. Otherwise, the values you read are not normalized. An example: if a texture is .bgra8Unorm and a given pixel contains the byte values [0, 64, 128, 255], then when read in a shader that requests float components, you will get [0.5, 0.25, 0, 1.0] when you sample it. By contrast, if the format is .rgba8Uint, you will get [0, 64, 128, 255]. The storage format of the texture has a prevailing effect on how its contents get interpreted upon sampling. I assume that the pixel format of your texture is something like .rgba8Unorm. If that's the case, you can achieve what you want by writing your kernel like this: kernel void updateEnvironmentMap(texture2d<float, access::read> currentFrameTexture [[texture(0)]], texture2d<float, access::read> coordinateConversionTexture [[texture(1)]], texture2d<float, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const float4 pixel(255, 127, 63, 255); environmentMap.write(pixel * (1 / 255.0), gid); } By contrast, if your texture has a format of .rgba8Uint, you'll get the same effect by writing it like this: kernel void updateEnvironmentMap(texture2d<float, access::read> currentFrameTexture [[texture(0)]], texture2d<float, access::read> coordinateConversionTexture [[texture(1)]], texture2d<float, access::write> environmentMap [[texture(2)]] uint2 gid [[thread_position_in_grid]]) { const float4 pixel(255, 127, 63, 255); environmentMap.write(pixel, gid); } I understand that this is a toy example, but I hope that with the foregoing information, you can figure out how to correctly store and sample values to achieve what you want.
Metal³
47,738,441
18
import UIKit import Metal import QuartzCore class ViewController: UIViewController { var device: MTLDevice! = nil var metalLayer: CAMetalLayer! = nil override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. device = MTLCreateSystemDefaultDevice() metalLayer = CAMetalLayer() // 1 metalLayer.device = device // 2 metalLayer.pixelFormat = .BGRA8Unorm // 3 metalLayer.framebufferOnly = true // 4 metalLayer.frame = view.layer.frame // 5 view.layer.addSublayer(metalLayer) // 6 } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } When I have this in my ViewController.swift, I get the error "Use of undeclared type CAMetalLayer" even though I've imported Metal and QuartzCore. How can I get this code to work?
UPDATE: Simulator support is coming this year (2019) Pre Xcode 11/iOS 13: Metal code doesn't compile on the Simulator. Try compiling for a device.
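For completeness, a small sketch of the usual runtime guard so the same code base degrades gracefully on simulators that lack Metal (function name is illustrative):

import Metal

func metalDeviceIfAvailable() -> MTLDevice? {
    guard let device = MTLCreateSystemDefaultDevice() else {
        #if targetEnvironment(simulator)
        print("Metal is unavailable in this simulator; run on a device or an iOS 13+ simulator on macOS 10.15+.")
        #endif
        return nil
    }
    return device
}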
Metal
32,917,630
17
I tried using Metal in a simple app but when I call the device.newDefaultLibrary() function then I get an error in runtime: /BuildRoot/Library/Caches/com.apple.xbs/Sources/Metal/Metal-56.7/Framework/MTLLibrary.mm:1842: failed assertion `Metal default library not found' Has anyone any idea what cloud be the problem? I followed this tutorial. The code is a little old but with tiny changes it work. Here is my ViewController code: import UIKit import Metal import QuartzCore class ViewController: UIViewController { //11A var device: MTLDevice! = nil //11B var metalLayer: CAMetalLayer! = nil //11C let vertexData:[Float] = [ 0.0, 1.0, 0.0, -1.0, -1.0, 0.0, 1.0, -1.0, 0.0] var vertexBuffer: MTLBuffer! = nil //11F var pipelineState: MTLRenderPipelineState! = nil //11G var commandQueue: MTLCommandQueue! = nil //12A var timer: CADisplayLink! = nil override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. //11A device = MTLCreateSystemDefaultDevice() //11B metalLayer = CAMetalLayer() // 1 metalLayer.device = device // 2 metalLayer.pixelFormat = .BGRA8Unorm // 3 metalLayer.framebufferOnly = true // 4 metalLayer.frame = view.layer.frame // 5 view.layer.addSublayer(metalLayer) // 6 //11C let dataSize = vertexData.count * sizeofValue(vertexData[0]) // 1 vertexBuffer = device.newBufferWithBytes(vertexData, length: dataSize, options: MTLResourceOptions.CPUCacheModeDefaultCache) // 2 //11F // 1 let defaultLibrary = device.newDefaultLibrary() //The error is generating here let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment") let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex") // 2 let pipelineStateDescriptor = MTLRenderPipelineDescriptor() pipelineStateDescriptor.vertexFunction = vertexProgram pipelineStateDescriptor.fragmentFunction = fragmentProgram pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm // 3 do { try pipelineState = device.newRenderPipelineStateWithDescriptor(pipelineStateDescriptor) } catch _ { print("Failed to create pipeline state, error") } //11G commandQueue = device.newCommandQueue() //12A timer = CADisplayLink(target: self, selector: Selector("gameloop")) timer.addToRunLoop(NSRunLoop.mainRunLoop(), forMode: NSDefaultRunLoopMode) } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } //MARK: Custom Methodes //12A func render() { //12C let commandBuffer = commandQueue.commandBuffer() //12B let drawable = metalLayer.nextDrawable() let renderPassDescriptor = MTLRenderPassDescriptor() renderPassDescriptor.colorAttachments[0].texture = drawable!.texture renderPassDescriptor.colorAttachments[0].loadAction = .Clear renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 104.0/255.0, blue: 5.0/255.0, alpha: 1.0) //12D let renderEncoderOpt = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) renderEncoderOpt.setRenderPipelineState(pipelineState) renderEncoderOpt.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0) renderEncoderOpt.drawPrimitives(.Triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1) renderEncoderOpt.endEncoding() //12E commandBuffer.presentDrawable(drawable!) commandBuffer.commit() } func gameloop() { autoreleasepool { self.render() } } } I use an iPhone 5s device with iOS 9.3 for testing.
The default library is only included in your app when you have at least one .metal file in your app target's Compile Sources build phase. I assume you've followed the steps of the tutorial where you created the Metal shader source file and added the vertex and fragment functions, so you simply need to use the + icon in the Build Phases settings to add that file to the Compile Sources phase.
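As a quick sanity check, a sketch using the current Swift API names (the question uses the older Swift 2 spellings) that confirms the default library made it into the bundle and contains your functions:

guard let library = device.makeDefaultLibrary() else {
    fatalError("default.metallib not found: is a .metal file listed under Compile Sources?")
}
print(library.functionNames)   // should include "basic_vertex" and "basic_fragment"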
Metal
36,204,360
16
I'm doing realtime video processing on iOS at 120 fps and want to first preprocess image on GPU (downsample, convert color, etc. that are not fast enough on CPU) and later postprocess frame on CPU using OpenCV. What's the fastest way to share camera feed between GPU and CPU using Metal? In other words the pipe would look like: CMSampleBufferRef -> MTLTexture or MTLBuffer -> OpenCV Mat I'm converting CMSampleBufferRef -> MTLTexture the following way CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // textureRGBA { size_t width = CVPixelBufferGetWidth(pixelBuffer); size_t height = CVPixelBufferGetHeight(pixelBuffer); MTLPixelFormat pixelFormat = MTLPixelFormatBGRA8Unorm; CVMetalTextureRef texture = NULL; CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, _textureCache, pixelBuffer, NULL, pixelFormat, width, height, 0, &texture); if(status == kCVReturnSuccess) { textureBGRA = CVMetalTextureGetTexture(texture); CFRelease(texture); } } After my metal shader is finised I convert MTLTexture to OpenCV cv::Mat image; ... CGSize imageSize = CGSizeMake(drawable.texture.width, drawable.texture.height); int imageByteCount = int(imageSize.width * imageSize.height * 4); int mbytesPerRow = 4 * int(imageSize.width); MTLRegion region = MTLRegionMake2D(0, 0, int(imageSize.width), int(imageSize.height)); CGSize resSize = CGSizeMake(drawable.texture.width, drawable.texture.height); [drawable.texture getBytes:image.data bytesPerRow:mbytesPerRow fromRegion:region mipmapLevel:0]; Some observations: 1) Unfortunately MTLTexture.getBytes seems expensive (copying data from GPU to CPU?) and takes around 5ms on my iphone 5S which is too much when processing at ~100fps 2) I noticed some people use MTLBuffer instead of MTLTexture with the following method: metalDevice.newBufferWithLength(byteCount, options: .StorageModeShared) (see: Memory write performance - GPU CPU Shared Memory) However CMSampleBufferRef and accompanying CVPixelBufferRef is managed by CoreVideo is guess.
The fastest way to do this is to use a MTLTexture backed by a MTLBuffer; it is a special kind of MTLTexture that shares memory with a MTLBuffer. However, your C processing (openCV) will be running a frame or two behind, this is unavoidable as you need to submit the commands to the GPU (encoding) and the GPU needs to render it, if you use waitUntilCompleted to make sure the GPU is finished that just chews up the CPU and is wasteful. So the process would be: first you create the MTLBuffer then you use the MTLBuffer method "newTextureWithDescriptor:offset:bytesPerRow:" to create the special MTLTexture. You need to create the special MTLTexture beforehand (as an instance variable), then you need to setup up a standard rendering pipeline (faster than using compute shaders) that will take the MTLTexture created from the CMSampleBufferRef and pass this into your special MTLTexture, in that pass you can downscale and do any colour conversion as necessary in one pass. Then you submit the command buffer to the gpu, in a subsequent pass you can just call [theMTLbuffer contents] to grab the pointer to the bytes that back your special MTLTexture for use in openCV. Any technique that forces a halt in the CPU/GPU behaviour will never be efficient as half the time will be spent waiting i.e. the CPU waits for the GPU to finish and the GPU has to wait also for the next encodings (when the GPU is working you want the CPU to be encoding the next frame and doing any openCV work rather than waiting for the GPU to finish). Also, when people normally refer to real-time processing they usually are referring to some processing with real-time feedback (visual), all modern iOS devices from the 4s and above have a 60Hz screen refresh rate, so any feedback presented faster than that is pointless but if you need 2 frames (at 120Hz) to make 1 (at 60Hz) then you have to have a custom timer or modify CADisplayLink.
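A rough sketch of the buffer-backed texture setup described above; the sizes, pixel format, and usage flags are assumptions to illustrate the idea, not the answerer's exact code:

let width = 640, height = 480
let bytesPerPixel = 4                                                 // .bgra8Unorm
let align = device.minimumLinearTextureAlignment(for: .bgra8Unorm)    // iOS 11+; older systems document a fixed row alignment
let bytesPerRow = ((width * bytesPerPixel + align - 1) / align) * align

let backingBuffer = device.makeBuffer(length: bytesPerRow * height,
                                      options: .storageModeShared)!

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.storageMode = .shared
descriptor.usage = [.shaderWrite, .shaderRead]   // add .renderTarget where the GPU allows rendering into linear textures

// Texture that aliases the buffer's memory: the GPU writes into it,
// then backingBuffer.contents() is wrapped in a cv::Mat on the CPU.
let sharedTexture = backingBuffer.makeTexture(descriptor: descriptor,
                                              offset: 0,
                                              bytesPerRow: bytesPerRow)!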
Metal
37,639,271
16
When working with Metal, I find there's a bewildering number of types and it's not always clear to me which type I should be using in which context. In Apple's Metal Shading Language Specification, there's a pretty clear table of which types are supported within a Metal shader file. However, there's plenty of sample code available that seems to use additional types that are part of SIMD. On the macOS (Objective-C) side of things, the Metal types are not available but the SIMD ones are and I'm not sure which ones I'm supposed to be used. For example: In the Metal Spec, there's float2 that is described as a "vector" data type representing two floating components. On the app side, the following all seem to be used or represented in some capacity: float2, which is typedef ::simd_float2 float2 in vector_types.h Noted: "In C or Objective-C, this type is available as simd_float2." vector_float2, which is typedef simd_float2 vector_float2 Noted: "This type is deprecated; you should use simd_float2 or simd::float2 instead" simd_float2, which is typedef __attribute__((__ext_vector_type__(2))) float simd_float2 ::simd_float2 and simd::float2 ? A similar situation exists for matrix types: matrix_float4x4, simd_float4x4, ::simd_float4x4 and float4x4, Could someone please shed some light on why there are so many typedefs with seemingly overlapping functionality? If you were writing a new application today (2018) in Objective-C / Objective-C++, which type should you use to represent two floating values (x/y) and which type for matrix transforms that can be shared between app code and Metal?
The types with vector_ and matrix_ prefixes have been deprecated in favor of those with the simd_ prefix, so the general guidance (using float4 as an example) would be: In C code, use the simd_float4 type. (You have to include the prefix unless you provide your own typedef, since C doesn't have namespaces.) Same for Objective-C. In C++ code, use the simd::float4 type, which you can shorten to float4 by using namespace simd;. Same for Objective-C++. In Metal code, use the float4 type, since float4 is a fundamental type in the Metal Shading Language [1]. In Swift code, use the float4 type, since the simd_ types are typealiased to shorter names. Update: In Swift 5, float4 and related types have been deprecated in favor of SIMD4<Float> and related types. These types are all fundamentally equivalent, and all have the same size and alignment characteristics so you can use them across languages. That is, in fact, one of the design goals of the simd framework. I'll leave a discussion of packed types to another day, since you didn't ask. [1] Metal is an unusual case since it defines float4 in the global namespace, then imports it into the metal namespace, which is also exported as the simd namespace. It additionally aliases float4 as vector_float4. So, you can use any of the above names for this vector type (except simd_float4). Prefer float4.
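A small illustration of the cross-language intent (file and member names are hypothetical): one C header using the simd_ types, included from Objective-C/C++ and, in Apple's project templates, from the .metal sources as well. Depending on SDK version the matrix may need the matrix_float4x4 spelling inside a shared header, since that alias is what Apple's own templates use.

// ShaderTypes.h
#ifndef ShaderTypes_h
#define ShaderTypes_h

#include <simd/simd.h>

typedef struct {
    simd_float4x4 modelViewProjection;   // maps to float4x4 in Metal, simd_float4x4 in Swift
    simd_float2   viewportSize;          // maps to float2 in Metal, SIMD2<Float> in Swift
} Uniforms;

#endif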
Metal
51,790,490
16
On 18th May 2022, PyTorch announced support for GPU-accelerated PyTorch training on Mac. I followed the following process to set up PyTorch on my Macbook Air M1 (using miniconda). conda create -n torch-nightly python=3.8 $ conda activate torch-nightly $ pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu I am trying to execute a script from Udacity's Deep Learning Course available here. The script moves the models to GPU using the following code: G.cuda() D.cuda() However, this will not work on M1 chips, since there is no CUDA. If we want to move models to M1 GPU and our tensors to M1 GPU, and train entirely on M1 GPU, what should we be doing? If Relevant: G and D are Discriminator and Generators for GAN's. class Discriminator(nn.Module): def __init__(self, conv_dim=32): super(Discriminator, self).__init__() self.conv_dim = conv_dim # complete init function self.cv1 = conv(in_channels=3, out_channels=conv_dim, kernel_size=4, stride=2, padding=1, batch_norm=False) # 32*32*3 -> 16*16*32 self.cv2 = conv(in_channels=conv_dim, out_channels=conv_dim*2, kernel_size=4, stride=2, padding=1, batch_norm=True) # 16*16*32 -> 8*8*64 self.cv3 = conv(in_channels=conv_dim*2, out_channels=conv_dim*4, kernel_size=4, stride=2, padding=1, batch_norm=True) # 8*8*64 -> 4*4*128 self.fc1 = nn.Linear(in_features = 4*4*conv_dim*4, out_features = 1, bias=True) def forward(self, x): # complete forward function out = F.leaky_relu(self.cv1(x), 0.2) out = F.leaky_relu(self.cv2(x), 0.2) out = F.leaky_relu(self.cv3(x), 0.2) out = out.view(-1, 4*4*conv_dim*4) out = self.fc1(out) return out D = Discriminator(conv_dim) class Generator(nn.Module): def __init__(self, z_size, conv_dim=32): super(Generator, self).__init__() self.conv_dim = conv_dim self.z_size = z_size # complete init function self.fc1 = nn.Linear(in_features = z_size, out_features = 4*4*conv_dim*4) self.dc1 = deconv(in_channels = conv_dim*4, out_channels = conv_dim*2, kernel_size=4, stride=2, padding=1, batch_norm=True) self.dc2 = deconv(in_channels = conv_dim*2, out_channels = conv_dim, kernel_size=4, stride=2, padding=1, batch_norm=True) self.dc3 = deconv(in_channels = conv_dim, out_channels = 3, kernel_size=4, stride=2, padding=1, batch_norm=False) def forward(self, x): # complete forward function x = self.fc1(x) x = x.view(-1, conv_dim*4, 4, 4) x = F.relu(self.dc1(x)) x = F.relu(self.dc2(x)) x = F.tanh(self.dc3(x)) return x G = Generator(z_size=z_size, conv_dim=conv_dim)
This is what I used:

if torch.backends.mps.is_available():
    mps_device = torch.device("mps")
    G.to(mps_device)
    D.to(mps_device)

Similarly, for all tensors that I want to move to the M1 GPU, I used:

tensor_ = tensor_.to(mps_device)

Some operations are not yet implemented using MPS, and we might need to set a few environment variables to use CPU fallback instead. One error that I faced while executing the script was

# NotImplementedError: The operator 'aten::_slow_conv2d_forward' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

To solve it I set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1

conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1
conda activate <test-env>

References:
https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
https://pytorch.org/docs/master/notes/mps.html
https://sebastianraschka.com/blog/2022/pytorch-m1-gpu.html
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#setting-environment-variables
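Putting it together for the question's models, a minimal sketch (variable names taken from the question) that falls back to the CPU when MPS is unavailable:

import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

D = Discriminator(conv_dim).to(device)
G = Generator(z_size=z_size, conv_dim=conv_dim).to(device)

z = torch.randn(16, z_size, device=device)   # create tensors directly on the chosen device
fake_images = G(z)
scores = D(fake_images)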
Metal
72,416,726
15
I'd like to build a dissolve in effect for a Scenekit game. I've been looking into shader modifiers since they seem to be the most light weight and haven't had any luck in replicating this effect: Is it possible to use shader modifiers to create this effect? How would you go about implementing one?
You can get pretty close to the intended effect with a fragment shader modifier. The basic approach is as follows: Sample from a noise texture If the noise sample is below a certain threshold (which I call "revealage"), discard it, making it fully transparent Otherwise, if the fragment is close to the edge, replace its color with your preferred edge color (or gradient) Apply bloom to make the edges glow Here's the shader modifier code for doing this: #pragma arguments float revealage; texture2d<float, access::sample> noiseTexture; #pragma transparent #pragma body const float edgeWidth = 0.02; const float edgeBrightness = 2; const float3 innerColor = float3(0.4, 0.8, 1); const float3 outerColor = float3(0, 0.5, 1); const float noiseScale = 3; constexpr sampler noiseSampler(filter::linear, address::repeat); float2 noiseCoords = noiseScale * _surface.ambientTexcoord; float noiseValue = noiseTexture.sample(noiseSampler, noiseCoords).r; if (noiseValue > revealage) { discard_fragment(); } float edgeDist = revealage - noiseValue; if (edgeDist < edgeWidth) { float t = edgeDist / edgeWidth; float3 edgeColor = edgeBrightness * mix(outerColor, innerColor, t); _output.color.rgb = edgeColor; } Notice that the revealage parameter is exposed as a material parameter, since you might want to animate it. There are other internal constants, such as edge width and noise scale that can be fine-tuned to get the desired effect with your content. Different noise textures produce different dissolve effects, so you can experiment with that as well. I just used this multioctave value noise image: Load the image as a UIImage or NSImage and set it on the material property that gets exposed as noiseTexture: material.setValue(SCNMaterialProperty(contents: noiseImage), forKey: "noiseTexture") You'll need to add bloom as a post-process to get that glowy, e-wire effect. In SceneKit, this is as simple as enabling the HDR pipeline and setting some parameters: let camera = SCNCamera() camera.wantsHDR = true camera.bloomThreshold = 0.8 camera.bloomIntensity = 2 camera.bloomBlurRadius = 16.0 camera.wantsExposureAdaptation = false All of the numeric parameters will potentially need to be tuned to your content. To keep things tidy, I prefer to keep shader modifiers in their own text files (I named mine "dissolve.fragment.txt"). Here's how to load some modifier code and attach it to a material. let modifierURL = Bundle.main.url(forResource: "dissolve.fragment", withExtension: "txt")! let modifierString = try! String(contentsOf: modifierURL) material.shaderModifiers = [ SCNShaderModifierEntryPoint.fragment : modifierString ] And finally, to animate the effect, you can use a CABasicAnimation wrapped with a SCNAnimation: let revealAnimation = CABasicAnimation(keyPath: "revealage") revealAnimation.timingFunction = CAMediaTimingFunction(name: .linear) revealAnimation.duration = 2.5 revealAnimation.fromValue = 0.0 revealAnimation.toValue = 1.0 let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation) material.addAnimation(scnRevealAnimation, forKey: "Reveal")
Metal
54,562,128
14
I'm having trouble rendering semitransparent sprites in Metal. I have read this question, and this question, and this one, and this thread on Apple's forums, and several more, but can't quite get it to work, so please read on before marking this question as a duplicate. My reference texture has four rows and four columns. The rows are fully-saturated red, green, blue and black, respectively. The columns vary in opacity from 100% opaque to 25% opaque (1, 0.75, 0.5, 0.25 alpha, in that order). On Pixelmator (where I created it), it looks like this: If I insert a fully opaque white background before exporting it, it will look like this: ...However, when I texture-map it onto a quad in Metal, and render that after clearing the background to opaque white (255, 255, 255, 255), I get this: ...which is clearly darker than it should be in the non-opaque fragments (the bright white behind should "bleed through"). Implementation Details I imported the png file into Xcode as a texture asset in my app's asset catalog, and at runtime, I load it using MTKTextureLoader. The .SRGB option doesn't seem to make a difference. The shader code is not doing anything fancy as far as I can tell, but for reference: #include <metal_stdlib> using namespace metal; struct Constants { float4x4 modelViewProjection; }; struct VertexIn { float4 position [[ attribute(0) ]]; float2 texCoords [[ attribute(1) ]]; }; struct VertexOut { float4 position [[position]]; float2 texCoords; }; vertex VertexOut sprite_vertex_transform(device VertexIn *vertices [[buffer(0)]], constant Constants &uniforms [[buffer(1)]], uint vertexId [[vertex_id]]) { float4 modelPosition = vertices[vertexId].position; VertexOut out; out.position = uniforms.modelViewProjection * modelPosition; out.texCoords = vertices[vertexId].texCoords; return out; } fragment float4 sprite_fragment_textured(VertexOut fragmentIn [[stage_in]], texture2d<float, access::sample> tex2d [[texture(0)]], constant Constants &uniforms [[buffer(1)]], sampler sampler2d [[sampler(0)]]) { float4 surfaceColor = tex2d.sample(sampler2d, fragmentIn.texCoords); return surfaceColor; } On the app side, I am using the following (pretty standard) blend factors and operations on my render pass descriptor: descriptor.colorAttachments[0].rgbBlendOperation = .add descriptor.colorAttachments[0].alphaBlendOperation = .add descriptor.colorAttachments[0].sourceRGBBlendFactor = .one descriptor.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha descriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha (I have tried changing the sourceRGBBlendFactor from .one to .sourceAlpha makes it a bit darker.) If I render the image on a red background (255, 0, 0, 255) instead, I get this: Notice how the top row gets gradually darker towards the right. It should be the same color all along since it is blending two colors that have the same RGB component (255, 0, 0). I have stripped my app to its bare minimum and put a demo project on Github; The full Metal setup can be seen in the repository's source code. Perhaps there's something I didn't mention that is causing this, but can't quite figure out what... Edit: As suggested by @KenThomases in the comments, I changed the value of the MTKView property colorPixelFormat from the default of .bgra8Unorm to bgra8Unorm_srgb, and set the colorSpace property to the same as view.window?.colorSpace?.cgColorSpace. 
Now, the semitransparent fragments look much less dark, but still not the expected color: (The top row should be completely 'invisible' against the red background, left to right.) Addendum I came up across Apple's docs on using the Shader Debugger, so I decided to take a look at what happens in the fragment shader when my app draws one of the top-right fragments of the sprite (which is suposed to be fully-saturated red at 25% opacity). Interestingly enough, the value returned from the fragment shader (to which alpha blending will be then applied, based on the color buffer's current color and the blend factors/functions) is [0.314, 0.0, 0.0, 0.596]: This RGBA value seems to be completely unaffected by whether MTKTextureLoader.Option.SRGB is true, false, or absent. Notice that the red component (0.314) and the alpha component (0.596) are not equal, although (if I'm not mistaken) they should be, for a fully-saturated red with premultiplied alpha. I guess this means I've narrowed my issue down to the texture loading stage...? Perhaps I should abandon the convenient MTKTextureLoader and get my hands dirty...?
Well, it turns out the problem was indeed in the texture loading stage, but not in any piece of code that I could possibly tweak (at least not if sticking to MTKTextureLoader). It seems that I needed to introduce some changes to the Attributes Inspector of my asset catalog in Xcode (but at least now I get to tag my original question with Xcode: one step closer to the bronze badge!). Specifically, I had to change the texture set's Interpretation attribute from the default option of "Colors" to "Colors (non-premultiplied)". Clearly, these asset catalog texture sets were designed with more traditional texture image formats in mind, such as TGA, and not PNG (which is officially non-premultiplied, as per the specification). I somehow expected that MTKTextureLoader would be smart enough to do this for me at load time. Evidently, it is not a piece of information that can be reliably read from (e.g.) a PNG file's metadata/header. Now, my reference texture is rendered in all its bright glory. As a final, more rigorous test, I can confirm that all 4 colors "disappear" over an equivalent RGB background, regardless of the texels' opacities.
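For reference, a short sketch of how the blend factors pair with each alpha convention, using the question's pipeline descriptor (illustrative only):

// Premultiplied-alpha textures (the convention the question's working setup uses):
descriptor.colorAttachments[0].sourceRGBBlendFactor = .one
descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha

// Straight (non-premultiplied) alpha textures would instead use:
// descriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
// descriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha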
Metal
55,604,226
14
In Metal, which coordinate system is used inside a shader (for inputs and outputs)? And when we render to a texture, is it the same? What about the z buffer? Are there any inconsistencies? Finally, what are the differences between Metal, OpenGL and DirectX?
Metal Coordinate Systems
Metal defines several standard coordinate systems to represent transformed graphics data at different stages along the rendering pipeline.
1) NDC (Normalized Device Coordinates): this coordinate system is used by developers to construct their geometries and transform the geometries in the vertex shader via model and view matrices. Point (-1, -1) in NDC is located at the bottom-left corner (Y up).
2) Framebuffer coordinates (viewport coordinates): when we write into an attachment, read from an attachment, or copy/blit between attachments, we use framebuffer coordinates to specify the location. The origin (0, 0) is located at the top-left corner (Y down).
3) Texture coordinates: when we upload a texture into memory or sample from a texture, we use texture coordinates. The origin (0, 0) is located at the top-left corner (Y down).
D3D12 and Metal
NDC: +Y is up. Point (-1, -1) is at the bottom-left corner.
Framebuffer coordinates: +Y is down. Origin (0, 0) is at the top-left corner.
Texture coordinates: +Y is down. Origin (0, 0) is at the top-left corner.
OpenGL, OpenGL ES and WebGL
NDC: +Y is up. Point (-1, -1) is at the bottom-left corner.
Framebuffer coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.
Texture coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.
Vulkan
NDC: +Y is down. Point (-1, -1) is at the top-left corner.
Framebuffer coordinates: +Y is down. Origin (0, 0) is at the top-left corner.
Texture coordinates: +Y is up. Origin (0, 0) is at the bottom-left corner.
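A tiny illustration of the practical consequence when porting OpenGL content to Metal: flipping the V coordinate at sampling time. This is a sketch and is unnecessary if your assets are already authored for a top-left origin.

// Convert a bottom-left-origin (OpenGL-style) UV to Metal's top-left origin.
float2 glToMetalUV(float2 uv)
{
    return float2(uv.x, 1.0 - uv.y);
}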
Metal
58,702,023
14
It has been a while since I have used Xcode, but since Apple revealed Xcode 6 and the new Metal API I had to check it out. They have released examples, e.g. a converted version of their Basic3D example. I am having problems making them compile and run, though. I had to add paths to the Quartz and Metal frameworks for compilation and linking to work - I thought that should have worked out of the box with built-in libraries? Also, the Metal shader compilation step fails; it cannot find the Metal compiler. Where in Xcode is that path set up? If I ignore the shader compilation step I get an error that the application will not run on any of the simulators. Can Metal-based 3D applications only run on actual hardware and not in simulator mode?
According to a staff on Apple dev forums, Metal does not run in the simulator. See: https://devforums.apple.com/message/971605#971605 If you look at other samples/app templates there is no need to explicitly link any framework, indeed everything runs out of the box.
Metal
24,046,125
13
Usually, I use the below code to identify the iOS version of the device. if ([[UIDevice currentDevice].systemVersion floatValue] >= 8.0) In a similar way, I'm trying to find Metal support for the device. Metal is supported for Apple devices with the A7 (or better) GPU and iOS 8.0. This is the way I expect my code to work: if (MetalSupported == true) { // metal programming } else { // opengles2 programming } How do I get the value for the Boolean variable MetalSupported ?
It's good that you're looking for something specific to Metal — generally, iOS version checks and hardware name checks are fragile, because they rely on your app knowing all of the OS versions and devices that could ever run it. If Apple were to go back and release an iOS 7.x version that added Metal support (okay, seems unlikely), or a device that supports Metal but isn't one of the hardware names you're looking at (seems much more likely), you'd be stuck having to track all of those things down and update your app to manage them. Anyway, the best way to check whether the device you're running on is Metal enough for your awesome graphics code? Just try to get a MTLDevice object: id<MTLDevice> device = MTLCreateSystemDefaultDevice(); if (device) { // ready to rock 🤘 } else { // back to OpenGL } Note that just testing for the presence of a Metal framework class doesn't help — those classes are there on any device running iOS 8 (all the way back to iPhone 4s & iPad 2), regardless of whether that device has a Metal-capable GPU. In Simulator, Metal is supported as of iOS 13 / tvOS 13 when running on macOS 10.15. Use the same strategy: call MTLCreateSystemDefaultDevice(). If it returns an object then your simulator code is running in an environment where the simulator is hardware-accelerated. If it returns nil then you're running on an older simulator or in an environment where Metal is not available.
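The same check in Swift, for reference (a direct translation of the Objective-C snippet above):

if let device = MTLCreateSystemDefaultDevice() {
    // ready to rock: create the command queue, pipelines, etc.
    let commandQueue = device.makeCommandQueue()
} else {
    // fall back to the OpenGL ES path
}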
Metal
29,790,663
13
I have a MTLTexture containing 16bit unsigned integers (MTLPixelFormatR16Uint). The values range from about 7000 to 20000, with 0 being used as a 'nodata' value, which is why it is skipped in the code below. I'd like to find the minimum and maximum values so I can rescale these values between 0-255. Ultimately I'll be looking to base the minimum and maximum values on a histogram of the data (it has some outliers), but for now I'm stuck on simply extracting the min/max. I can read the data from the GPU to CPU and pull the min/max values out but would prefer to perform this task on the GPU. First attempt The command encoder is dispatched with 16x16 threads per thread group, the number of thread groups is based on the texture size (eg; width = textureWidth / 16, height = textureHeight / 16). typedef struct { atomic_uint min; atomic_uint max; } BandMinMax; kernel void minMax(texture2d<ushort, access::read> band1 [[texture(0)]], device BandMinMax &out [[buffer(0)]], uint2 gid [[thread_position_in_grid]]) { ushort value = band1.read(gid).r; if (value != 0) { uint currentMin = atomic_load_explicit(&out.min, memory_order_relaxed); uint currentMax = atomic_load_explicit(&out.max, memory_order_relaxed); if (value > currentMax) { atomic_store_explicit(&out.max, value, memory_order_relaxed); } if (value < currentMin) { atomic_store_explicit(&out.min, value, memory_order_relaxed); } } } From this I get a minimum and maximum value, but for the same dataset the min and max will often return different values. Fairly certain this is the min and max from a single thread when there are multiple threads running. Second attempt Building on the previous attempt, this time I'm storing the individual min/max values from each thread, all 256 (16x16). kernel void minMax(texture2d<ushort, access::read> band1 [[texture(0)]], device BandMinMax *out [[buffer(0)]], uint2 gid [[thread_position_in_grid]], uint tid [[ thread_index_in_threadgroup ]]) { ushort value = band1.read(gid).r; if (value != 0) { uint currentMin = atomic_load_explicit(&out[tid].min, memory_order_relaxed); uint currentMax = atomic_load_explicit(&out[tid].max, memory_order_relaxed); if (value > currentMax) { atomic_store_explicit(&out[tid].max, value, memory_order_relaxed); } if (value < currentMin) { atomic_store_explicit(&out[tid].min, value, memory_order_relaxed); } } } This returns an array containing 256 sets of min/max values. From these I guess I could find the lowest of the minimum values, but this seems like a poor approach. Would appreciate a pointer in the right direction, thanks!
The Metal Shading Language has atomic compare-and-swap functions you can use to compare the existing value at a memory location with a value, and replace the value at that location if they don't compare equal. With these, you can create a set of atomic compare-and-replace-if-[greater|less]-than operations: static void atomic_uint_exchange_if_less_than(volatile device atomic_uint *current, uint candidate) { uint val; do { val = *((device uint *)current); } while ((candidate < val || val == 0) && !atomic_compare_exchange_weak_explicit(current, &val, candidate, memory_order_relaxed, memory_order_relaxed)); } static void atomic_uint_exchange_if_greater_than(volatile device atomic_uint *current, uint candidate) { uint val; do { val = *((device uint *)current); } while (candidate > val && !atomic_compare_exchange_weak_explicit(current, &val, candidate, memory_order_relaxed, memory_order_relaxed)); } To apply these, you might create a buffer that contains one interleaved min, max pair per threadgroup. Then, in the kernel function, read from the texture and conditionally write the min and max values: kernel void min_max_per_threadgroup(texture2d<ushort, access::read> texture [[texture(0)]], device uint *mapBuffer [[buffer(0)]], uint2 tpig [[thread_position_in_grid]], uint2 tgpig [[threadgroup_position_in_grid]], uint2 tgpg [[threadgroups_per_grid]]) { ushort val = texture.read(tpig).r; device atomic_uint *atomicBuffer = (device atomic_uint *)mapBuffer; atomic_uint_exchange_if_less_than(atomicBuffer + ((tgpig[1] * tgpg[0] + tgpig[0]) * 2), val); atomic_uint_exchange_if_greater_than(atomicBuffer + ((tgpig[1] * tgpg[0] + tgpig[0]) * 2) + 1, val); } Finally, run a separate kernel to reduce over this buffer and collect the final min, max values across the entire texture: kernel void min_max_reduce(constant uint *mapBuffer [[buffer(0)]], device uint *reduceBuffer [[buffer(1)]], uint2 tpig [[thread_position_in_grid]]) { uint minv = mapBuffer[tpig[0] * 2]; uint maxv = mapBuffer[tpig[0] * 2 + 1]; device atomic_uint *atomicBuffer = (device atomic_uint *)reduceBuffer; atomic_uint_exchange_if_less_than(atomicBuffer, minv); atomic_uint_exchange_if_greater_than(atomicBuffer + 1, maxv); } Of course, you can only reduce over the total allowed thread execution width of the device (~256), so you may need to do the reduction in multiple passes, with each one reducing the size of the data to be operated on by a factor of the maximum thread execution width. Disclaimer: This may not be the best technique, but it does appear to be correct in my limited testing of an OS X implementation. It was marginally faster than a naive CPU implementation on a 256x256 texture on Intel Iris Pro, but substantially slower on an Nvidia GT 750M (because of dispatch overhead).
Metal
36,663,645
13
In Metal what is the difference between a packed_float4 and a float4?
This information is from here float4 has an alignment of 16 bytes. This means that the memory address of such a type (e.g. 0x12345670) will be divisible by 16 (aka the last hexadecimal digit is 0). packed_float4 on the other hand has an alignment of 4 bytes. Last digit of the address will be 0, 4, 8 or c This does matter when you create custom structs. Say you want a struct with 2 normal floats and 1 float4/packed_float4: struct A{ float x, y; float4 z; } struct B{ float x, y; packed_float4 z; } For A: The alignment of float4 has to be 16 and since float4 has to be after the normal floats, there is going to be 8 bytes of empty space between y and z. Here is what A looks like in memory: Address | 0x200 | 0x204 | 0x208 | 0x20c | 0x210 | 0x214 | 0x218 | 0x21c | Content | x | y | - | - | z1 | z2 | z3 | z4 | ^Has to be 16 byte aligned For B: Alignment of packed_float4 is 4, the same as float, so it can follow right after the floats in any case: Address | 0x200 | 0x204 | 0x208 | 0x20c | 0x210 | 0x214 | Content | x | y | z1 | z2 | z3 | z4 | As you can see, A takes up 32 bytes whereas B only uses 24 bytes. When you have an array of those structs, A will take up 8 more bytes for every element. So for passing around a lot of data, the latter is preferred. The reason you need float4 at all is because the GPU can't handle 4 byte aligned packed_float4s, you won't be able to return packed_float4 in a shader. This is because of performance I assume. One last thing: When you declare the Swift version of a struct: struct S { let x, y: Float let z : (Float, Float, Float, Float) } This struct will be equal to B in Metal and not A. A tuple is like a packed_floatN. All of this also applies to other vector types such as packed_float3, packed_short2, ect.
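You can see the same padding from the Swift side. A small sketch (struct names are made up) mirroring A and B above:

import simd

struct LikeA {                      // float, float, float4: padded to 16-byte alignment
    var x: Float, y: Float
    var z: SIMD4<Float>
}

struct LikeB {                      // float, float, packed_float4: no padding
    var x: Float, y: Float
    var z: (Float, Float, Float, Float)
}

print(MemoryLayout<SIMD4<Float>>.alignment)   // 16
print(MemoryLayout<LikeA>.stride)             // 32
print(MemoryLayout<LikeB>.stride)             // 24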
Metal
38,773,807
13
Is it possible to import or include metal file into another metal file? Say I have a metal file with all the math functions and I will only include or import it if it is needed in my metal project. Is it possible? I tried: #include "sdf.metal" and I got error: metallib: Multiply defined symbols _Z4vmaxDv2_f Command/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/usr/bin/metallib failed with exit code 1 Update: Here are both my shader files: SDF.metal: #ifndef MYAPP_METAL_CONSTANTS #define MYAPP_METAL_CONSTANTS #include <metal_stdlib> namespace metal { float kk(float2 v) { return max(v.x, v.y); } float kkk(float3 v) { return max(max(v.x, v.y), v.z); } } #endif And Shaders.metal: #include <metal_stdlib> #include "SDF.metal" using namespace metal; float fBoxCheap(float3 p, float3 b) { //cheap box return kkk(abs(p) - b); } float map( float3 p ) { float box2 = fBoxCheap(p-float3(0.0,3.0,0.0),float3(4.0,3.0,1.0)); return box2; } float3 getNormal( float3 p ) { float3 e = float3( 0.001, 0.00, 0.00 ); float deltaX = map( p + e.xyy ) - map( p - e.xyy ); float deltaY = map( p + e.yxy ) - map( p - e.yxy ); float deltaZ = map( p + e.yyx ) - map( p - e.yyx ); return normalize( float3( deltaX, deltaY, deltaZ ) ); } float trace( float3 origin, float3 direction, thread float3 &p ) { float totalDistanceTraveled = 0.0; for( int i=0; i <64; ++i) { p = origin + direction * totalDistanceTraveled; float distanceFromPointOnRayToClosestObjectInScene = map( p ); totalDistanceTraveled += distanceFromPointOnRayToClosestObjectInScene; if( distanceFromPointOnRayToClosestObjectInScene < 0.0001 ) { break; } if( totalDistanceTraveled > 10000.0 ) { totalDistanceTraveled = 0.0000; break; } } return totalDistanceTraveled; } float3 calculateLighting(float3 pointOnSurface, float3 surfaceNormal, float3 lightPosition, float3 cameraPosition) { float3 fromPointToLight = normalize(lightPosition - pointOnSurface); float diffuseStrength = clamp( dot( surfaceNormal, fromPointToLight ), 0.0, 1.0 ); float3 diffuseColor = diffuseStrength * float3( 1.0, 0.0, 0.0 ); float3 reflectedLightVector = normalize( reflect( -fromPointToLight, surfaceNormal ) ); float3 fromPointToCamera = normalize( cameraPosition - pointOnSurface ); float specularStrength = pow( clamp( dot(reflectedLightVector, fromPointToCamera), 0.0, 1.0 ), 10.0 ); // Ensure that there is no specular lighting when there is no diffuse lighting. specularStrength = min( diffuseStrength, specularStrength ); float3 specularColor = specularStrength * float3( 1.0 ); float3 finalColor = diffuseColor + specularColor; return finalColor; } kernel void compute(texture2d<float, access::write> output [[texture(0)]], constant float &timer [[buffer(1)]], constant float &mousex [[buffer(2)]], constant float &mousey [[buffer(3)]], uint2 gid [[thread_position_in_grid]]) { int width = output.get_width(); int height = output.get_height(); float2 uv = float2(gid) / float2(width, height); uv = uv * 2.0 - 1.0; // scale proportionately. 
if(width > height) uv.x *= float(width)/float(height); if(width < height) uv.y *= float(height)/float(width); float posx = mousex * 2.0 - 1.0; float posy = mousey * 2.0 - 1.0; float3 cameraPosition = float3( posx * 0.01,posy * 0.01, -10.0 ); float3 cameraDirection = normalize( float3( uv.x, uv.y, 1.0) ); float3 pointOnSurface; float distanceToClosestPointInScene = trace( cameraPosition, cameraDirection, pointOnSurface ); float3 finalColor = float3(1.0); if( distanceToClosestPointInScene > 0.0 ) { float3 lightPosition = float3( 5.0, 2.0, -10.0 ); float3 surfaceNormal = getNormal( pointOnSurface ); finalColor = calculateLighting( pointOnSurface, surfaceNormal, lightPosition, cameraPosition ); } output.write(float4(float3(finalColor), 1), gid); } Update2: and my MetalView.swift: import MetalKit public class MetalView: MTKView, NSWindowDelegate { var queue: MTLCommandQueue! = nil var cps: MTLComputePipelineState! = nil var timer: Float = 0 var timerBuffer: MTLBuffer! var mousexBuffer: MTLBuffer! var mouseyBuffer: MTLBuffer! var pos: NSPoint! var floatx: Float! var floaty: Float! required public init(coder: NSCoder) { super.init(coder: coder) self.framebufferOnly = false device = MTLCreateSystemDefaultDevice() registerShaders() } override public func drawRect(dirtyRect: NSRect) { super.drawRect(dirtyRect) if let drawable = currentDrawable { let command_buffer = queue.commandBuffer() let command_encoder = command_buffer.computeCommandEncoder() command_encoder.setComputePipelineState(cps) command_encoder.setTexture(drawable.texture, atIndex: 0) command_encoder.setBuffer(timerBuffer, offset: 0, atIndex: 1) command_encoder.setBuffer(mousexBuffer, offset: 0, atIndex: 2) command_encoder.setBuffer(mouseyBuffer, offset: 0, atIndex: 3) update() let threadGroupCount = MTLSizeMake(8, 8, 1) let threadGroups = MTLSizeMake(drawable.texture.width / threadGroupCount.width, drawable.texture.height / threadGroupCount.height, 1) command_encoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupCount) command_encoder.endEncoding() command_buffer.presentDrawable(drawable) command_buffer.commit() } } func registerShaders() { queue = device!.newCommandQueue() do { let library = device!.newDefaultLibrary()! let kernel = library.newFunctionWithName("compute")! timerBuffer = device!.newBufferWithLength(sizeof(Float), options: []) mousexBuffer = device!.newBufferWithLength(sizeof(Float), options: []) mouseyBuffer = device!.newBufferWithLength(sizeof(Float), options: []) cps = try device!.newComputePipelineStateWithFunction(kernel) } catch let e { Swift.print("\(e)") } } func update() { timer += 0.01 var bufferPointer = timerBuffer.contents() memcpy(bufferPointer, &timer, sizeof(Float)) bufferPointer = mousexBuffer.contents() memcpy(bufferPointer, &floatx, sizeof(NSPoint)) bufferPointer = mouseyBuffer.contents() memcpy(bufferPointer, &floaty, sizeof(NSPoint)) } override public func mouseDragged(event: NSEvent) { pos = convertPointToLayer(convertPoint(event.locationInWindow, fromView: nil)) let scale = layer!.contentsScale pos.x *= scale pos.y *= scale floatx = Float(pos.x) floaty = Float(pos.y) debugPrint("Hello",pos.x,pos.y) } } Update 3 After implement as per KickimusButticus's solution, the shader did compile. However I have another error:
Your setup is incorrect (EDIT: And so was my setup in my other answer and the previous version of this answer.) You can use a header just like in C++ (Metal is based on C++11, after all...). All you need one is more file, I'll call it SDF.h. The file includes function prototype declarations without a namespace declaration. And you need to #include it after the using namespace metal; declaration in your other files. Make sure the header file is not a .metal file and that it is not in the Compile Sources list in your Build Phases. If the header is being treated as a compiled source, that's most likely what's causing the CompilerError. SDF.h: // SDFHeaders.metal #ifndef SDF_HEADERS #define SDF_HEADERS float kk(float2 v); float kkk(float3 v); #endif SDF.metal: #include <metal_stdlib> using namespace metal; #include "SDF.h" float kk(float2 v) { return max(v.x, v.y); } float kkk(float3 v) { return max(max(v.x, v.y), v.z); } Shaders.metal: Here is where you use the functions after including SDF.h. // Shaders.metal #include <metal_stdlib> using namespace metal; #include "SDF.h" float fBoxCheap(float3 p, float3 b) { //cheap box return kkk(abs(p) - b); } // ... And of course, build after cleaning. Good luck!
Metal
39,283,565
13
I got this passthrough vertex shader I used from Apple's sample code:

vertex VertexIO vertexPassThrough(device packed_float4 *pPosition [[ buffer(0) ]],
                                  device packed_float2 *pTexCoords [[ buffer(1) ]],
                                  uint vid [[ vertex_id ]])
{
    VertexIO outVertex;
    outVertex.position = pPosition[vid];
    outVertex.textureCoord = pTexCoords[vid];
    return outVertex;
}

This worked in Swift 4/Xcode 10/iOS 12. Now, with Swift 5/Xcode 11/iOS 13, I get this warning:

writable resources in non-void vertex function
You need to ensure the shader can only read from those buffers, so you need to change the declaration to const device: vertex VertexIO vertexPassThrough(const device packed_float4 *pPosition [[ buffer(0) ]], const device packed_float2 *pTexCoords [[ buffer(1) ]], uint vid [[ vertex_id ]]) { ... }
Metal
57,692,571
13
I want to pass a float to my metal shader. I cannot figure out how. Here is my shader: vertex float4 model_vertex(unsigned int iid[[instance_id]] constant float angle) { float number = float(iid) / 64.0; return float4(number * sin(angle), number * cos(angle), 0.0, 1.0); } Now I want to pass it to the shader: let renderPassDescriptor = MTLRenderPassDescriptor() let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) // ... let angle: Float = 0.5 renderEncoder.setUniform1(angle) // What do I do here? How do I pass the single float value?
Also in 10.11+ and iOS 9+ you can use: public func setVertexBytes(bytes: UnsafePointer<Void>, length: Int, atIndex index: Int) Which is documented to be better than creating a MTLBuffer if you're only using the buffer once (and your data is less than 4K long).
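A sketch of how that looks for the angle in the question, using the current index: spelling (the signature quoted above shows the older atIndex: name); the buffer index is arbitrary as long as both sides agree:

var angle: Float = 0.5
renderEncoder.setVertexBytes(&angle,
                             length: MemoryLayout<Float>.size,
                             index: 1)

On the shader side, declare the parameter as constant float &angle [[buffer(1)]] (note the reference and the explicit buffer attribute, which the question's signature is missing).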
Metal
26,298,840
12
This is an absolute beginner question. Background: I’m not really a game developer, but I’m trying to learn the basics of low-level 3D programming, because it’s a fun and interesting topic. I’ve picked Apple’s Metal as the graphics framework. I know about SceneKit and other higher level frameworks, but I’m intentionally trying to learn the low level bits. Unfortunately I’m way out of my depth, and there seems to be very little beginner-oriented Metal resources on the web. By reading the Apple documentation and following the tutorials I could find, I’ve managed to implement a simple vertex shader and a fragment shader and draw an actual 3D model on the screen. Now I’m trying to draw a second a model, but I’m kind of stuck, because I’m not really sure what’s really the best way to go about it. Do I… Use a single vertex buffer and index buffer for all of my models, and tell the MTLRenderCommandEncoder the offsets when rendering the individual models? Have a separate vertex buffer / index buffer for each model? Would such an approach scale? Something else? TL;DR: What is the recommended way to store the vertex data of multiple models in Metal (or any other 3D framework)?
There is no one recommended way. When you're working at such a low level as Metal, there are many possibilities, and the one you pick depends heavily on the situation and what performance characteristics you want/need to optimize for. If you're just playing around with intro projects, most of these decisions are irrelevant, because the performance issues won't bite until you scale up to a "real" project. Typically, game engines use one buffer (or set of vertex/index buffers) per model, especially if each model requires different render states (e.g. shaders, bound textures). This means that when new models are introduced to the scene or old ones no longer needed, the requisite resources can be loaded into / removed from GPU memory (by way of creating / destroying MTL objects). The main use case for doing multiple draws out of (different parts of) the same buffer is when you're mutating the buffer. For example, on frame n you're using the first 1KB of a buffer to draw with, while at the same time you're computing / streaming in new vertex data and writing it to the second 1KB of the buffer... then for frame n + 1 you switch which parts of the buffer are being used for what.
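For a concrete picture of the buffer-per-model approach, a minimal sketch (types and names are illustrative, not a prescription):

struct Model {
    let vertexBuffer: MTLBuffer
    let indexBuffer: MTLBuffer
    let indexCount: Int
}

func draw(_ models: [Model], with encoder: MTLRenderCommandEncoder) {
    for model in models {
        encoder.setVertexBuffer(model.vertexBuffer, offset: 0, index: 0)
        encoder.drawIndexedPrimitives(type: .triangle,
                                      indexCount: model.indexCount,
                                      indexType: .uint16,
                                      indexBuffer: model.indexBuffer,
                                      indexBufferOffset: 0)
    }
}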
Metal
34,485,259
12
I am trying to compute sum of large array in parallel with metal swift. Is there a god way to do it? My plane was that I divide my array to sub arrays, compute sum of one sub arrays in parallel and then when parallel computation is finished compute sum of sub sums. for example if I have array = [a0,....an] I divide array in sub arrays : array_1 = [a_0,...a_i], array_2 = [a_i+1,...a_2i], .... array_n/i = [a_n-1, ... a_n] sums for this arrays is computed in parallel and I get sum_1, sum_2, sum_3, ... sum_n/1 at the end just compute sum of sub sums. I create application which run my metal shader, but some things I don't understand quite. var array:[[Float]] = [[1,2,3], [4,5,6], [7,8,9]] // get device let device: MTLDevice! = MTLCreateSystemDefaultDevice() // get library let defaultLibrary:MTLLibrary! = device.newDefaultLibrary() // queue let commandQueue:MTLCommandQueue! = device.newCommandQueue() // function let kernerFunction: MTLFunction! = defaultLibrary.newFunctionWithName("calculateSum") // pipeline with function let pipelineState: MTLComputePipelineState! = try device.newComputePipelineStateWithFunction(kernerFunction) // buffer for function let commandBuffer:MTLCommandBuffer! = commandQueue.commandBuffer() // encode function let commandEncoder:MTLComputeCommandEncoder = commandBuffer.computeCommandEncoder() // add function to encode commandEncoder.setComputePipelineState(pipelineState) // options let resourceOption = MTLResourceOptions() let arrayBiteLength = array.count * array[0].count * sizeofValue(array[0][0]) let arrayBuffer = device.newBufferWithBytes(&array, length: arrayBiteLength, options: resourceOption) commandEncoder.setBuffer(arrayBuffer, offset: 0, atIndex: 0) var result:[Float] = [0,0,0] let resultBiteLenght = sizeofValue(result[0]) let resultBuffer = device.newBufferWithBytes(&result, length: resultBiteLenght, options: resourceOption) commandEncoder.setBuffer(resultBuffer, offset: 0, atIndex: 1) let threadGroupSize = MTLSize(width: 1, height: 1, depth: 1) let threadGroups = MTLSize(width: (array.count), height: 1, depth: 1) commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadGroupSize) commandEncoder.endEncoding() commandBuffer.commit() commandBuffer.waitUntilCompleted() let data = NSData(bytesNoCopy: resultBuffer.contents(), length: sizeof(Float), freeWhenDone: false) data.getBytes(&result, length: result.count * sizeof(Float)) print(result) is my Swift code, my shader is : kernel void calculateSum(const device float *inFloat [[buffer(0)]], device float *result [[buffer(1)]], uint id [[ thread_position_in_grid ]]) { float * f = inFloat[id]; float sum = 0; for (int i = 0 ; i < 3 ; ++i) { sum = sum + f[i]; } result = sum; } I don't know how to defined that inFloat is array of array. I don't know exactly what is threadGroupSize and threadGroups. I don't know what is device and uint in shader properties. Is this right approach?
I took the time to create a fully working example of this problem with Metal. The explanation is in the comments: let count = 10_000_000 let elementsPerSum = 10_000 // Data type, has to be the same as in the shader typealias DataType = CInt let device = MTLCreateSystemDefaultDevice()! let library = self.library(device: device) let parsum = library.makeFunction(name: "parsum")! let pipeline = try! device.makeComputePipelineState(function: parsum) // Our data, randomly generated: var data = (0..<count).map{ _ in DataType(arc4random_uniform(100)) } var dataCount = CUnsignedInt(count) var elementsPerSumC = CUnsignedInt(elementsPerSum) // Number of individual results = count / elementsPerSum (rounded up): let resultsCount = (count + elementsPerSum - 1) / elementsPerSum // Our data in a buffer (copied): let dataBuffer = device.makeBuffer(bytes: &data, length: MemoryLayout<DataType>.stride * count, options: [])! // A buffer for individual results (zero initialized) let resultsBuffer = device.makeBuffer(length: MemoryLayout<DataType>.stride * resultsCount, options: [])! // Our results in convenient form to compute the actual result later: let pointer = resultsBuffer.contents().bindMemory(to: DataType.self, capacity: resultsCount) let results = UnsafeBufferPointer<DataType>(start: pointer, count: resultsCount) let queue = device.makeCommandQueue()! let cmds = queue.makeCommandBuffer()! let encoder = cmds.makeComputeCommandEncoder()! encoder.setComputePipelineState(pipeline) encoder.setBuffer(dataBuffer, offset: 0, index: 0) encoder.setBytes(&dataCount, length: MemoryLayout<CUnsignedInt>.size, index: 1) encoder.setBuffer(resultsBuffer, offset: 0, index: 2) encoder.setBytes(&elementsPerSumC, length: MemoryLayout<CUnsignedInt>.size, index: 3) // We have to calculate the sum `resultCount` times => amount of threadgroups is `resultsCount` / `threadExecutionWidth` (rounded up) because each threadgroup will process `threadExecutionWidth` threads let threadgroupsPerGrid = MTLSize(width: (resultsCount + pipeline.threadExecutionWidth - 1) / pipeline.threadExecutionWidth, height: 1, depth: 1) // Here we set that each threadgroup should process `threadExecutionWidth` threads, the only important thing for performance is that this number is a multiple of `threadExecutionWidth` (here 1 times) let threadsPerThreadgroup = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1) encoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup) encoder.endEncoding() var start, end : UInt64 var result : DataType = 0 start = mach_absolute_time() cmds.commit() cmds.waitUntilCompleted() for elem in results { result += elem } end = mach_absolute_time() print("Metal result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))") result = 0 start = mach_absolute_time() data.withUnsafeBufferPointer { buffer in for elem in buffer { result += elem } } end = mach_absolute_time() print("CPU result: \(result), time: \(Double(end - start) / Double(NSEC_PER_SEC))") I used my Mac to test it, but it should work just fine on iOS. Output: Metal result: 494936505, time: 0.024611456 CPU result: 494936505, time: 0.163341018 The Metal version is about 7 times faster. I'm sure you can get more speed if you implement something like divide-and-conquer with cutoff or whatever.
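The host code above references a parsum kernel without showing it. One plausible implementation that matches those buffer bindings (an illustrative sketch, not necessarily the answerer's exact shader):

kernel void parsum(const device int *data            [[buffer(0)]],
                   constant uint    &dataCount       [[buffer(1)]],
                   device int       *partialSums     [[buffer(2)]],
                   constant uint    &elementsPerSum  [[buffer(3)]],
                   uint id [[thread_position_in_grid]])
{
    uint start = id * elementsPerSum;
    if (start >= dataCount) { return; }               // extra threads from the rounded-up dispatch
    uint end = min(start + elementsPerSum, dataCount);
    int sum = 0;
    for (uint i = start; i < end; ++i) {
        sum += data[i];
    }
    partialSums[id] = sum;
}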
Metal
38,164,634
12
I have a Metal texture, I want to access its data from Swift by making it a float4 array (so that I can access each pixel 4 color components). I discovered this method of MTLTexture : getBytes(UnsafeMutableRawPointer, bytesPerRow: Int, bytesPerImage: Int, from: MTLRegion, mipmapLevel: Int, slice: Int) I don't know at all how to use UnsafeMutableRawPointer, how it works, and how to get the data back into a simple Swift array. My first attempt was to create a pointer and allocate enough space like that, but I don't even know if that's what I should do: var pointer = UnsafeMutableRawPointer.allocate(bytes: myTextureSizeInBytes, alignedTo: 0) Then I have no idea at all of how to get this data back into a standard Swift array... Thank you.
First, let's assume you have a UnsafeRawPointer and a length: let ptr: UnsafeRawPointer = ... let length: Int = ... Now you want to convert that to an [float4]. First, you can convert your UnsafeRawPointer to a typed pointer by binding it to a type: let float4Ptr = ptr.bindMemory(to: float4.self, capacity: length) Now you can convert that to a typed buffer pointer: let float4Buffer = UnsafeBufferPointer(start: float4Ptr, count: length) And since a buffer is a collection, you can initialize an array with it: let output = Array(float4Buffer) For much more on working with UnsafeRawPointer, see SE-0138, SE-0107, and the UnsafeRawPointer Migration Guide.
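Putting the pieces together for a texture read, a sketch that assumes a .rgba32Float texture whose storage mode allows CPU access (blit to a shared texture first if it is .private):

let count = texture.width * texture.height
var pixels = [SIMD4<Float>](repeating: SIMD4<Float>(repeating: 0), count: count)
let bytesPerRow = texture.width * MemoryLayout<SIMD4<Float>>.stride
let region = MTLRegionMake2D(0, 0, texture.width, texture.height)

pixels.withUnsafeMutableBytes { rawBuffer in
    texture.getBytes(rawBuffer.baseAddress!,
                     bytesPerRow: bytesPerRow,
                     from: region,
                     mipmapLevel: 0)
}
// pixels is now an ordinary [SIMD4<Float>] (the modern spelling of float4).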
Metal
41,574,498
12
Whenever I build a project that includes a metal shader to an x86_64 target (iOS simulator), I get a dependency analysis warning: warning: no rule to process file '[File Path]/Shaders.metal' of type sourcecode.metal for architecture x86_64 I know this isn't a huge issue but I like to keep my projects free from warnings when I build, so that when a real issue does arise, I actually notice the yellow warning triangle. Any quick way to get Xcode to ignore metal files for simulator targets?
You can resolve this by precompiling your .metal file into a Metal library during the build step, and removing the .metal source code from your app target. Remove .metal file from target Select your .metal file in the project navigator, and uncheck the target that is giving you the warning. Metal library compile script Create a bash script called CompileMetalLib.sh in your project, alongside your .metal file, with contents like this: xcrun -sdk iphoneos metal -c MyShader.metal -o MyShader.air xcrun -sdk iphoneos metallib MyShader.air -o MyShader.metallib rm MyShader.air Make sure to give it executable permissions by running chmod +x CompileMetalLib.sh. MyShader.air is the intermediate compile step, and MyShader.metallib is the fully compiled metal library. Read all about compiling a Metal file here If you're compiling for OS X, change iphoneos to macosx. Run compile script during build Now you'll want to trigger this script in your Build Phases. Add a New Run Script Phase to your target. The contents should look like this: cd ${SRCROOT}/path/to/folder/containing/yourshader ./CompileMetalLib.sh It's important to drag this step so that it happens before the Copy Bundle Resources step. Change your code to use your compiled Metal library You'll now load the compiled Metal library from your app bundle. Here's some pseudo-Swift: let metalLibPath = Bundle.main.path(forResource: "MyShader", ofType: "metallib") let myLibrary = try metalDevice.makeLibrary(filepath: metalLibPath) Result You are manually compiling your .metal file with an external script, and copying the compiled library into your bundle resources. Your code loads this library. Now that you don't have any .metal files in your target, the simulator will no longer throw a warning about not being able to compile for x86_64.
Metal
46,254,475
12
I am going through the Metal iOS Swift example, trying to understand the triple buffering practice they suggest. This is shown inside the demo for the uniform animations. As I understand it, aligned memory simply starts at an address that is a multiple of some byte count the device prefers. My confusion is this line of code // The 256 byte aligned size of our uniform structure let alignedUniformsSize = (MemoryLayout<Uniforms>.size & ~0xFF) + 0x100 they use it to find the aligned size, in bytes, of the Uniforms struct. I am confused about why there are bitwise operations here; I am really not sure what they do. If it helps, this aligned size is used to create a buffer like this. I am fairly sure that the buffer allocates suitably aligned memory automatically and is henceforth used as the memory storage location for the uniforms. let buffer = self.device.makeBuffer(length:alignedUniformsSize * 3, options:[MTLResourceOptions.storageModeShared]) So essentially, rather than going through the trouble of allocating aligned memory themselves, they let Metal do it for them. Is there any reason that the strategy they used when they wrote let alignedUniformsSize = would not work for other types such as Int or Float etc.?
Let's talk first about why you'd want aligned buffers, then we can talk about the bitwise arithmetic. Our goal is to allocate a Metal buffer that can store three (triple-buffered) copies of our uniforms (so that we can write to one part of the buffer while the GPU reads from another). In order to read from each of these three copies, we supply an offset when binding the buffer, something like currentBufferIndex * uniformsSize. Certain Metal devices require these offsets to be multiples of 256, so we instead need to use something like currentBufferIndex * alignedUniformsSize as our offset. How do we "round up" an integer to the next highest multiple of 256? We can do it by dropping the lowest 8 bits of the "unaligned" size, effectively rounding down, then adding 256, which gets us the next highest multiple. The rounding down part is achieved by bitwise ANDing with the 1's complement (~) of 255, which (in 32-bit) is 0xFFFFFF00. The rounding up is done by just adding 0x100, which is 256. Interestingly, if the base size is already aligned, this technique spuriously rounds up anyway (e.g., from 256 to 512). For the cost of an integer divide, you can avoid this waste: let alignedUniformsSize = ((MemoryLayout<Uniforms>.size + 255) / 256) * 256
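A quick sketch of both approaches side by side; Uniforms is replaced by a plain size here so the arithmetic can be checked in a playground:

// Round up to the next multiple of `alignment` without overshooting already-aligned sizes.
func alignedUp(_ size: Int, to alignment: Int = 256) -> Int {
    return ((size + alignment - 1) / alignment) * alignment
}

let bitwise200 = (200 & ~0xFF) + 0x100   // 256
let divide200  = alignedUp(200)          // 256

// When the size is already a multiple of 256, the bitwise form overshoots:
let bitwise256 = (256 & ~0xFF) + 0x100   // 512
let divide256  = alignedUp(256)          // 256

The same rounding works for any element type's size (Int, Float, or a struct of them); the 256 here comes from the buffer-offset requirement, not from anything specific to the Uniforms type.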
Metal
46,431,114
12
I have a Metal fragment shader that returns some transparent colors with an alpha channel, and I'd like to reveal a UIView under the MTKView, but the only background result I get is black and "error noise". MTLRenderPipelineDescriptor: pipelineStateDescriptor.isAlphaToCoverageEnabled = true pipelineStateDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm pipelineStateDescriptor.colorAttachments[0].isBlendingEnabled = true pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha MTLRenderPassDescriptor: renderPassDescriptor.colorAttachments[0].loadAction = .clear renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0) If I change the clear color I can see it under the transparent colors, though if I skip the clear color I see "error noise". Does the clear color alpha channel actually do anything? Does anyone know how to make an MTKView transparent? Update: Here's the magical property to make an MTKView transparent: self.isOpaque = false
If a UIView may have transparent content or otherwise fail to fill itself with opaque drawing, it should set its opaque (isOpaque) property to false so that it will be properly composited with whatever is behind it. Since MTKView is a subclass of UIView, this applies to it as well.
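A short illustrative sketch of the pieces involved, reusing the descriptor from the question and a hypothetical metalView property for the MTKView:

// Let UIKit composite whatever sits behind the MTKView.
metalView.isOpaque = false
metalView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)

// Keep blending enabled on the pipeline so the fragment's alpha is respected.
// If the shader outputs non-premultiplied alpha, a matching source factor is typical:
pipelineStateDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha

With isOpaque left at its default of true, UIKit assumes the view fills every pixel and composites nothing behind it, which is why the clear color's alpha appeared to do nothing.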
Metal
47,643,974
12

StackOverflow Q&A Dataset for Various Projects

Description

This dataset consists of Q&A data extracted from StackOverflow, related to different projects in the CNCF (Cloud Native Computing Foundation) landscape. It includes the following three columns:

  1. Question: The question asked on StackOverflow.
  2. Answer: The corresponding answer to the question.
  3. Tag: The name of the project to which the question and answer are related.

The data was collected using the Stack Exchange API to extract Q&A pairs from StackOverflow for different projects in the CNCF (Cloud Native Computing Foundation) landscape. Each project's tag was used to query StackOverflow for questions and their accepted answers, which were then categorized under the relevant project name (tag).
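For illustration only (this is not the actual collection script), a minimal Swift sketch of fetching questions for one tag from the public Stack Exchange API; the tag value and the fields printed are placeholders:

import Foundation

// Query the Stack Exchange API for questions carrying a given tag.
let tag = "metal"
let url = URL(string: "https://api.stackexchange.com/2.3/questions?tagged=\(tag)&site=stackoverflow&filter=withbody")!

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    guard let data = data, error == nil else { return }
    // The response is a JSON object whose "items" array holds the questions.
    if let object = try? JSONSerialization.jsonObject(with: data),
       let json = object as? [String: Any],
       let items = json["items"] as? [[String: Any]] {
        for item in items {
            print(item["question_id"] ?? "", item["title"] ?? "")
        }
    }
}
task.resume()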

License

This dataset is available under the MIT license.
