How to play an MKV video with OGG audio?
There are two MKV files: A, containing OGG audio, and B, containing AAC audio.
B can be played either way.
1) With MF playback (CreateVideoRendererActivate), A cannot be played.
2) A can be played with MFCreateMediaPlayer.
I don't want to go the MFCreateMediaPlayer route, because then I have to transcode, etc.
I have installed an OGG decoder from the Windows Store, and the Windows built-in "TV & Video" app can also play A.
What do I need to do to get the OGG decoder to play A correctly in my program?
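For what it's worth, a first diagnostic step could be to enumerate the audio decoders visible to the desktop process and check whether the Store-installed decoder is listed at all; a sketch (assumes MFStartup has already been called; error handling trimmed):

// List every registered audio decoder MFT visible to this process.
IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_AUDIO_DECODER,
                       MFT_ENUM_FLAG_ALL,
                       NULL,              // no input type filter
                       NULL,              // no output type filter
                       &ppActivate, &count);
if (SUCCEEDED(hr))
{
    for (UINT32 i = 0; i < count; ++i)
    {
        LPWSTR pszName = NULL;
        UINT32 cchName = 0;
        if (SUCCEEDED(ppActivate[i]->GetAllocatedString(
                MFT_FRIENDLY_NAME_Attribute, &pszName, &cchName)))
        {
            wprintf(L"%u: %s\n", i, pszName);   // look for the OGG/Vorbis decoder here
            CoTaskMemFree(pszName);
        }
        ppActivate[i]->Release();
    }
    CoTaskMemFree(ppActivate);
}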
Microsoft Media Foundation App not running through Remote Desktop after August 2019 on Windows 10
Good day
I have developed a web camera app using Media Foundation. The app works fine, but starting with a Windows 10 release sometime after August 2019 I cannot run the app over Remote Desktop. It still runs if you are at the PC locally.
It consists of a webcam input, a transform, and a standard sink. But over RDP the app just crashes deep inside the code.
Due to the nature of the app I do need to RDP to it.
I currently run it on the Windows 10 August 2019 edition and have blocked all updates.
Is there any reason why this happens with newer releases of Windows? I do not know the exact last version on which it still works.
How should I solve it, or is it Microsoft that should solve it?
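For context, a sketch of the kind of guard around device enumeration that distinguishes "no capture device visible in this session" (as can happen over RDP) from other failures (placeholder names; assumes MFStartup has been called):

// Ask Media Foundation for the video capture devices visible to this session.
IMFAttributes *pAttributes = NULL;
IMFActivate **ppDevices = NULL;
UINT32 cDevices = 0;
HRESULT hr = MFCreateAttributes(&pAttributes, 1);
if (SUCCEEDED(hr))
    hr = pAttributes->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                              MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);
if (SUCCEEDED(hr))
    hr = MFEnumDeviceSources(pAttributes, &ppDevices, &cDevices);
if (SUCCEEDED(hr) && cDevices == 0)
{
    // Over RDP the local webcam may simply not be exposed; fail cleanly
    // instead of trying to activate a device that is not there.
    hr = MF_E_NOT_FOUND;
}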
gRPC
static async Task Main(string[] args)
{
    var httpClient = new HttpClient();
    httpClient.BaseAddress = new Uri("https://localhost:50051");
    var client = GrpcClient.Create<Greeter.GreeterClient>(httpClient);

    var reply = await client.SayHelloAsync(new HelloRequest { Name = "GreeterClient" });
    Console.WriteLine("Greeting: " + reply.Message);

    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}
andrew
Handling Image data from IMFSourceReader and IMFSample
I am attempting to use the IMFSourceReader to read and decode a .mp4 file. I have configured the source reader to decode to MFVideoFormat_NV12 by setting a partial media type and calling IMFSourceReader::SetCurrentMediaType, and loaded a video with dimensions of 1266x544.
While processing I receive the MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED flag with a new dimension of 1280x544 and a MF_MT_MINIMUM_DISPLAY_APERTURE of 1266x544.
I believe the expectation is to then use either the Video Resizer DSP or the Video Processor MFT. However, it is my understanding that the Video Processor MFT requires Windows 8.1 while I am on Windows 7, and the Video Resizer DSP does not support MFVideoFormat_NV12.
What is the correct way to crop out the extra data added by the source reader to display only the data within the minimum display aperture for MFVideoFormat_NV12?
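In the absence of a suitable MFT on Windows 7, one fallback is to copy out the aperture region manually; a sketch for NV12, assuming the aperture offset is (0,0) and that the source stride and padded height come from the current media type (all names are placeholders):

// Copy only the display-aperture region of a padded NV12 frame.
void CropNV12(const BYTE *pSrc, LONG srcStride, UINT32 srcHeight,
              BYTE *pDst, LONG dstStride,
              UINT32 apertureWidth, UINT32 apertureHeight)
{
    // Y plane: one byte per pixel, apertureHeight rows.
    for (UINT32 y = 0; y < apertureHeight; ++y)
        memcpy(pDst + y * dstStride, pSrc + y * srcStride, apertureWidth);

    // Interleaved UV plane: starts after srcHeight rows of Y in the source,
    // after apertureHeight rows in the destination, and has half as many rows.
    const BYTE *pSrcUV = pSrc + (size_t)srcHeight * srcStride;
    BYTE       *pDstUV = pDst + (size_t)apertureHeight * dstStride;
    for (UINT32 y = 0; y < apertureHeight / 2; ++y)
        memcpy(pDstUV + y * dstStride, pSrcUV + y * srcStride, apertureWidth);
}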
MediaTranscoding sample fails when target is 2160p60
I wanted to cross-post this issue here because I think the problem is Media Foundation related.
https://github.com/microsoft/Windows-universal-samples/issues/1200
The MediaTranscoding sample shows a red "unknown failure" error when I try to transcode to UHD 2160p60 on an Intel machine without Quick Sync Video and with an Nvidia GPU. UHD 2160p60 transcoding succeeds on Intel hardware with Quick Sync Video.
Repro steps:
- Build the MediaTranscoding sample (I built the C# version).
- Run the sample on a machine with an Intel CPU without Quick Sync Video. I used a Core i7-5960X with an Nvidia GeForce GTX 980 Ti.
- Select scenario #2 (Transcode with custom settings).
- Select any source video file.
- Enter Width=3840, Height=2160, Frame Rate=60
- Select an output file.
- Click the Transcode button.
You'll get a red "unknown failure" error.
If you change the target frame rate from 60 to 30, this scenario will succeed.
If you run this scenario on a Surface Pro it will succeed at both 60 and 30 frames per second. I used a Surface Pro LTE from mid 2018 that has an Intel Core i5-7300U with Quick Sync Video. I've also tried several other Intel processors with Quick Sync Video and they also work.
Perhaps this is some kind of platform bug. Or perhaps it's some kind of Intel and/or Nvidia driver bug. For what it's worth, I installed the latest driver from Nvidia (445.87). I am running Windows 10.0.18363.815.
I captured a trace with WPR and it showed one thing that might be a clue:
EventName: Microsoft-Windows-Runtime-Media/WinRTTranscode_PrepareTranscode_Task/Stop
Payload: ErrorCode="-1,072,861,856" FormattedMessage="PrepareTranscode operation 0x29ed6283fe0{Object} ended. -1,072,861,856{ErrorCode} "
That error code happens to be MF_E_TRANSFORM_TYPE_NOT_SET. The Media Foundation doc page for the H.264 encoder says this:
The output type must be set before the input type. Until the output type is set, the encoder's IMFTransform::SetInputType method returns MF_E_TRANSFORM_TYPE_NOT_SET.
So maybe there's a bug in how the OS sets up the graph when transcoding to 59.94 fps.
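To illustrate the ordering the documentation describes, a minimal sketch against IMFTransform (pEncoder, pOutType and pInType are placeholders, not taken from the sample):

// Configure the H.264 encoder MFT: output type first, then input type.
HRESULT hr = pEncoder->SetOutputType(0, pOutType, 0);
if (SUCCEEDED(hr))
{
    // Calling this before SetOutputType would fail with MF_E_TRANSFORM_TYPE_NOT_SET.
    hr = pEncoder->SetInputType(0, pInType, 0);
}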
Forcing MFT output to use specific type of buffers
I have a custom MFT Decoder and a custom MFT Encoder.
Decoding is done in software and encoding is done on the GPU.
I would like the software decoder to decode directly into a GPU-allocated buffer.
Is there any way to tell ProcessOutput to use a specific type of buffer? In DirectShow it was possible to set an allocator on the output pin; is there any way to do something similar in Media Foundation?
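A sketch of the kind of caller-supplied output buffer I have in mind, assuming the decoder MFT does not set MFT_OUTPUT_STREAM_PROVIDES_SAMPLES (pDecoder and pTexture are placeholders; error handling trimmed):

MFT_OUTPUT_STREAM_INFO streamInfo = {};
HRESULT hr = pDecoder->GetOutputStreamInfo(0, &streamInfo);
if (SUCCEEDED(hr) && !(streamInfo.dwFlags & MFT_OUTPUT_STREAM_PROVIDES_SAMPLES))
{
    // The caller allocates the output sample, so it can wrap a D3D11 texture.
    IMFMediaBuffer *pBuffer = NULL;
    IMFSample *pSample = NULL;
    hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), pTexture, 0, FALSE, &pBuffer);
    if (SUCCEEDED(hr)) hr = MFCreateSample(&pSample);
    if (SUCCEEDED(hr)) hr = pSample->AddBuffer(pBuffer);

    MFT_OUTPUT_DATA_BUFFER outputData = {};
    outputData.dwStreamID = 0;
    outputData.pSample = pSample;       // decoder writes into the GPU-backed sample
    DWORD dwStatus = 0;
    if (SUCCEEDED(hr)) hr = pDecoder->ProcessOutput(0, 1, &outputData, &dwStatus);
}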
Nadav Rubinstein, See my Blog @ http://nadavrub.wordpress.com
how to know if CLSID_CColorConvertDMO supports hardware acceleration
So I created CLSID_CColorConvertDMO using:

CComPtr<IMediaObject> pMediaObject;
pMediaObject.CoCreateInstance(CLSID_CColorConvertDMO);

Now I want to check whether it will do the conversion in hardware (on the GPU) or not. If it will not use the GPU then I do not want to use it. I read about MF_SA_D3D11_AWARE and the MFT_ENUM_HARDWARE_URL_Attribute; they should tell whether hardware acceleration is supported. But to check that I need access to IMFAttributes. So I tried this:

IMFTransform* oIMFTransform = NULL;
IMFAttributes* pAttributes = NULL;
HRESULT hr = pMediaObject->QueryInterface(IID_IMFTransform, (void**)&oIMFTransform);
hr = oIMFTransform->GetAttributes(&pAttributes);
if (SUCCEEDED(hr))
{
    UINT32 bD3DAware = MFGetAttributeUINT32(pAttributes, MF_SA_D3D_AWARE, FALSE);
    bD3DAware++;   // just to inspect the value in the debugger
    pAttributes->Release();
}

But the hr that comes back from hr = oIMFTransform->GetAttributes(&pAttributes); is always E_NOTIMPL. So how can I tell whether the color conversion will be done in hardware on this PC or not?
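For reference, an alternative check would be to enumerate hardware MFTs in the video processor category rather than querying the DMO itself; a sketch (assumes MFStartup has been called):

// If no hardware video processor MFT is registered, only software conversion is available.
IMFActivate **ppActivate = NULL;
UINT32 count = 0;
HRESULT hr = MFTEnumEx(MFT_CATEGORY_VIDEO_PROCESSOR,
                       MFT_ENUM_FLAG_HARDWARE,
                       NULL, NULL,
                       &ppActivate, &count);
if (SUCCEEDED(hr))
{
    bool hardwareAvailable = (count > 0);
    for (UINT32 i = 0; i < count; ++i)
        ppActivate[i]->Release();
    CoTaskMemFree(ppActivate);
}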
Thanks!
WMP Mute when UiMode is None
I have a problem trying to mute the sound from my playlist when UiMode is "none".
The code
CWMPPlaylistCollection playlistCollection = m_player.GetPlaylistCollection();
CWMPPlaylist playList = playlistCollection.newPlaylist(_T("MyList"));
playList.appendItem(m_player.newMedia(_T("V:\\Videos\\1.mp4")));
playList.appendItem(m_player.newMedia(_T("V:\\Videos\\2.mp4")));
playList.appendItem(m_player.newMedia(_T("V:\\Videos\\3.mp4")));
m_player.SetUiMode(_T("none"));
m_player.SetWindowlessVideo(TRUE);
m_player.SetStretchToFit(TRUE);
m_player.SetEnableContextMenu(FALSE);
BOOL bIsMute = m_player.GetSettings().GetIsAvailable(_T("Mute"));
if (bIsMute)
    m_player.GetSettings().SetMute(TRUE);
m_player.GetControls().play();
works well for the first video, but the sound is turned back on when the second video automatically starts.
I have noticed that if I select UiMode "full" or "mini" it works; then I can also see the mute button.
Any ideas what is wrong?
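One workaround to consider is re-applying Mute each time a new item starts playing, from a PlayStateChange handler; a sketch (assumes the control's PlayStateChange event is wired up, e.g. with ON_EVENT; the class and handler names are placeholders):

void CMyPlayerDlg::OnPlayStateChangeWmp(long lNewState)
{
    const long wmppsPlaying = 3;                 // WMPPlayState value for "Playing"
    if (lNewState == wmppsPlaying)
    {
        // Re-apply the mute setting for the item that just started.
        if (m_player.GetSettings().GetIsAvailable(_T("Mute")))
            m_player.GetSettings().SetMute(TRUE);
    }
}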
-cpede
Output video bitrate is unexpected when converting video using MFT
When we convert a fast-motion video with the following parameters:
- video codec: H264
- video frame rate: 30 frame/s
- video bitrate: 128000 bps
The output video bitrate is not the expected one (128 kbps).
We have implemented fast-motion video encoding using CODECAPI_AVEncCommonRateControlMode with the two values eAVEncCommonRateControlMode_CBR and eAVEncCommonRateControlMode_PeakConstrainedVBR. However, the output video still does not match the requested bitrate setting.
Video input: https://drive.google.com/drive/folders/1MJrv5VOKqbEvWHAOkwTH1ZfB9w6FiPRA?usp=sharing
Q: Can you explain the problem, is there something wrong with the codecAPI of MS?
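A minimal sketch of the ICodecAPI configuration described above (pEncoder is a placeholder for the H.264 encoder MFT; error handling trimmed):

CComPtr<ICodecAPI> pCodecApi;
HRESULT hr = pEncoder->QueryInterface(IID_PPV_ARGS(&pCodecApi));
if (SUCCEEDED(hr))
{
    VARIANT var;
    VariantInit(&var);

    var.vt = VT_UI4;
    var.ulVal = eAVEncCommonRateControlMode_CBR;           // or PeakConstrainedVBR
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonRateControlMode, &var);

    if (SUCCEEDED(hr))
    {
        var.ulVal = 128000;                                // target mean bitrate, bits/s
        hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonMeanBitRate, &var);
    }
}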
Mutex or Semaphore help wanted
OK, this is the scoop. I have this crazy project that is based on MFCaptureD3D from GitHub. The original project takes the output from a webcam and puts it on the screen. It is a high-quality picture. I found a place in device.cpp to intercept the pixel values, and my goal is to process these pictures electronically, independently of what MFCaptureD3D is doing. The webcam picture is 320 pixels wide and 240 pixels high, so the total number of pixels is 76,800. The logic of the project demands that those pixels be divided into small squares, each containing about 88 pixels; therefore the number of such squares would be 894.
The problem I have is that taking the pixel values and storing them is a very fast process, while the part that should process them is very slow, perhaps a couple of orders of magnitude slower; I don't really know for sure. In order to manage the pixel values coming in at such a speed, I decided to use three storage areas that the code fills sequentially. I hoped that my code would handle such a delay. This is the code that collects the pixel values. pcom is a structure. Filling one array such as pcom.s00, for instance, takes less than 7 milliseconds.
counter += 1;
switch (pcom.rFlag)
{
case 0:
    pcom.s00[counter] = y0;
    if (counter == 76799) { pcom.rFlag = 1; counter = -1; }
    break;
case 1:
    pcom.s01[counter] = y0;
    if (counter == 76799) { pcom.rFlag = 2; counter = -1; break; }
    break;
case 2:
    pcom.s02[counter] = y0;
    if (counter == 76799) { pcom.rFlag = 0; counter = -1; break; }
    break;
}
Then there is this structure:
struct OutMemStream2
{
    struct A1
    {
        struct W1
        {
            struct C1
            {
                float x1 = 0.0;
                float y1 = 0.0;
            } c[5];
        } w[10];
    } a[90];
    int numb_a90 = -1;
    float invariant = 0;
} b[864]; // Expected number of spherical rectangles total
The structure is so complicated because the values x1, y1 are in fact functions of a number of variables, some of which are integer indices. The part of the code that I want to post now is supposed to read raw pixel amplitudes from the arrays pcom.s00, pcom.s01, pcom.s02 sequentially, process them, and store the results x1, y1 in the structure above.
for (int ii = 0; ii < pcom.H; ii++)          // pcom.H == 240
{
    for (int jj = 0; jj < pcom.W; jj++)      // pcom.W == 320
    {
        pixelCounter += 1;
        if (pixelCounter == 0) { start = std::chrono::system_clock::now(); }

        DefineVars(diag, ii, jj, theta, phi);

        // determine which of the 864 spherical rectangles the pixel in question belongs to
        rectParalCoo = phi / 0.0872665;      // 0.0872665 == 5 degrees in radians
        colNum = std::floor(rectParalCoo);
        rectMeridCoo = theta / 0.0872665;
        rowNum = std::floor(rectMeridCoo);
        int bNumber = rowNum * 72 + colNum;  // bNumber is the sequential number of the spherical rect.
        b[bNumber].numb_a90 += 1;
        std::cout << " " << bNumber << " " << b[bNumber].numb_a90 << " " << pcom.rFlag << endl;

        // just determined theta & phi angles for individual pixels;
        // find the Spherical Harmonic for the pixels and pull the pixel's amplitude
        switch (pcom.rFlag)
        {
        case 0:
            value = pcom.s01[pixelCounter];  // <== VERY FAST
            break;
        case 1:
            value = pcom.s02[pixelCounter];
            break;
        case 2:
            value = pcom.s00[pixelCounter];
            break;
        }

        llCou = -1;
        for (int ll = pcom.llMin; ll < pcom.llMax; ll++)        // 10 values of ll
        {
            llCou += 1;
            mmCou = -1;
            // VERY SLOW PART
            for (int mm = pcom.mmMin; mm <= pcom.mmMax; mm++)   // 5 values of mm
            {
                mmCou += 1;
                res = (double)LegendrePolynomials::normSelector(cos(theta), ll, mm);
                res *= pcom.factorArr[(int)diag];   // peripheral attenuation/modulation
                res *= (float)value;                // int value is the amplitude of signal at a pixel
                // selection of the pixel input point by point
                BytesWritten += sizeof(float);
                b[bNumber].a[b[bNumber].numb_a90].w[llCou].c[mmCou].x1 = res * cos(phi * (float)mm);
                b[bNumber].a[b[bNumber].numb_a90].w[llCou].c[mmCou].y1 = res * sin(phi * (float)mm);
                // pfRArray[pixelCounter] = res * cos(phi * (float)mm);   // real part
                // pfIArray[pixelCounter] = res * sin(phi * (float)mm);   // imaginary part
            }
        }
    }
}
It is a simple piece of code, however it does not work because of the difference in speed. I run into exceptions and the pixels end up all messed up. I know where the solution lies: I need to use either a mutex or a semaphore, but I don't know how to attach them to my code. I would appreciate it if somebody would offer a helping hand.
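The kind of hand-off that seems to be needed looks roughly like this sketch (std::mutex plus std::condition_variable; all names are placeholders, not code from the project):

#include <mutex>
#include <condition_variable>

std::mutex bufMutex;
std::condition_variable bufReady;
int readyIndex = -1;                     // index of the last completely filled buffer

// Capture side: call after one of the three buffers (0, 1 or 2) is full.
void PublishBuffer(int filledIndex)
{
    {
        std::lock_guard<std::mutex> lock(bufMutex);
        readyIndex = filledIndex;
    }
    bufReady.notify_one();
}

// Processing side: block until a full buffer is available, then claim it.
int WaitForBuffer()
{
    std::unique_lock<std::mutex> lock(bufMutex);
    bufReady.wait(lock, [] { return readyIndex >= 0; });
    int idx = readyIndex;
    readyIndex = -1;                     // mark as consumed
    return idx;
}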
Thank you - MyCatAlex
playing media sources sequentially and help with Sequencer Source sample
Hi,
I am trying to write an application that will play video files one after the other without any delays. I have tried following the code in the Sequencer Source sample; however, the function hr = GetCollectionObject(pSourceNodes, 0, &pNode); called in HRESULT CPlaylist::GetDurationFromTopology(IMFTopology* pTopology, LONGLONG* phnsDuration) is missing from the sample. Does anyone know what the function GetCollectionObject should be?
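For reference, in the Media Foundation documentation samples GetCollectionObject appears to be a small template helper over IMFCollection::GetElement plus QueryInterface; a sketch:

template <class Q>
HRESULT GetCollectionObject(IMFCollection *pCollection, DWORD dwIndex, Q **ppObject)
{
    *ppObject = NULL;
    IUnknown *pUnk = NULL;
    HRESULT hr = pCollection->GetElement(dwIndex, &pUnk);
    if (SUCCEEDED(hr))
    {
        hr = pUnk->QueryInterface(IID_PPV_ARGS(ppObject));
        pUnk->Release();
    }
    return hr;
}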
Any other samples or examples of playing media sequentially?
I am just starting to learn about the Media Foundation, if you know of any good books or articles on it, please let me know also.
Thanks
Crash when calling IMFActivate::ActivateObject
Hi. I have a problem on one particular PC. Part of the code follows:
unFlags = MFT_ENUM_FLAG_HARDWARE;
hr = MFTEnumEx(MFT_CATEGORY_VIDEO_ENCODER,
               unFlags,
               &input,      // Input type
               &output,     // Output type
               &ppActivate,
               &count);
if (SUCCEEDED(hr) && count == 0)
{
    hr = MF_E_TOPO_CODEC_NOT_FOUND;
}
// Create the first encoder in the list.
if (SUCCEEDED(hr))
{
    for (int i = 0; i < count; ++i)
    {
        hr = ppActivate[i]->ActivateObject(IID_PPV_ARGS(&pCodec));
    }
}
MFTEnumEx returns S_OK;
count is 1;
When ActivateObject executes, the application crashes.
The error code is 0xC000041D.
I tried updating Windows and the drivers, but that did not work.
My current solution is to catch the crash and use another encoder to do the job.
Has anyone seen this issue?
How can I solve the problem in another way?
Thanks
External camera triggers with Media Foundation
Hi,
I need to be able to use external physical buttons on a UVC web cam with Media Foundation (ideally). When pushed, the external button needs to be detected so the application I'm working on can save the current camera framebuffer image. I'm using C++ with Win32 APIs.
DirectShow has the ability to access an external button via IAMVideoControl::SetMode with the VideoControlFlag_ExternalTriggerEnable flag, but I can't seem to find the equivalent functionality when using Media Foundation.
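For reference, that DirectShow route looks roughly like this sketch (pCaptureFilter and pStillPin are placeholders obtained from the capture graph; error handling trimmed):

CComPtr<IAMVideoControl> pVideoControl;
HRESULT hr = pCaptureFilter->QueryInterface(IID_PPV_ARGS(&pVideoControl));
if (SUCCEEDED(hr))
{
    long flags = 0;
    hr = pVideoControl->GetMode(pStillPin, &flags);
    if (SUCCEEDED(hr))
    {
        // Let the camera's hardware button fire the still pin.
        hr = pVideoControl->SetMode(pStillPin, flags | VideoControlFlag_ExternalTriggerEnable);
    }
}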
Is my only option to rewrite everything in DirectShow (which is deprecated, I think) or is there a way to get things working with Media Foundation? Also, I need support for Windows 7 and up.
Thanks!
Capturing an image from photo stream, using SourceReader, gives video stream buffer for first still trigger
I have developed a desktop camera application using the SourceReader technique in Media Foundation.
The features of the application are the following:
1. Video Streaming
2. Still Capture and
3. Video Capture.
The SourceReader is in asynchronous mode and is capable of video streaming and capturing an image. But I am facing a major issue while capturing an image from the photo stream.
Issue description:
At the very first still trigger from the photo stream, I receive a video streaming buffer sample in the OnReadSample callback, with dwStreamIndex 0 (zero).
(E.g.: video streaming resolution is 1280 x 720 and still resolution is 1920 x 1080; the buffer received for the first still trigger is 1280 x 720.)
This issue occurs in the following scenarios:
1. The first still trigger after launching the application.
2. A still trigger after a change in the video resolution.
For consecutive still triggers, I receive the sample from the photo stream as expected, in the OnReadSample callback, with dwStreamIndex 1 (one).
The issue reappears only after a change in the video resolution followed by a still trigger, not after a change in the still resolution.
Why am I receiving a video streaming buffer instead of a photo stream buffer for the first still trigger? Am I missing some configuration for the photo stream? If so, please help me resolve this issue.
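For reference, a sketch of the stream selection and still request as I understand they should look (m_pReader is a placeholder for the source reader; stream 1 is assumed to be the photo stream):

// Make sure both the video stream and the photo stream are selected,
// then issue an asynchronous read on the photo stream.
HRESULT hr = m_pReader->SetStreamSelection(0, TRUE);       // video stream
if (SUCCEEDED(hr))
    hr = m_pReader->SetStreamSelection(1, TRUE);           // photo stream
if (SUCCEEDED(hr))
    hr = m_pReader->ReadSample(1, 0, NULL, NULL, NULL, NULL);  // result arrives in OnReadSample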
Thanks in advance.
IMFSourceReader.ReadSample never hitting callback after reading on stream index 1. Calls to Stream index 0 work fine.
Some background: I'm reworking an application that used DirectShow so that it uses Windows Media Foundation instead. In DirectShow I have the UVC camera still pins working fine. However, when I switched to using a SourceReader in WMF, I get stream 0 (the live video stream), but when I use the same interface to try to request samples on stream 1 I don't receive anything. This is with the following call:
hr = StreamReader.ReadSample(1,
MediaFoundation.ReadWrite.MF_SOURCE_READER_CONTROL_FLAG.None,
IntPtr.Zero,
IntPtr.Zero,
IntPtr.Zero,
IntPtr.Zero
);
If I switch it to
MediaFoundation.ReadWrite.MF_SOURCE_READER_CONTROL_FLAG.Drain,
IntPtr.Zero,
IntPtr.Zero,
IntPtr.Zero,
IntPtr.Zero
);
I receive only null IMFSamples. I've checked the state of hr and it is always S_OK. During this time I am also running the same call on stream 0 and it is working fine. The only error or flag I get is StreamTick on the first frame on stream 0.
I'm not entirely sure where to go from here; if anyone has suggestions I'm open to them.
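One check worth adding first (shown here as a native sketch; pReader is a placeholder) is whether stream 1 is actually selected on the reader before samples are requested from it:

BOOL fSelected = FALSE;
HRESULT hr = pReader->GetStreamSelection(1, &fSelected);
if (SUCCEEDED(hr) && !fSelected)
{
    // Streams other than the defaults do not deliver samples unless selected.
    hr = pReader->SetStreamSelection(1, TRUE);
}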
[Announcement] “Media Foundation Development for Windows Desktop” Forum will be migrating to a new home on Microsoft Q&A!
This “Media Foundation Development for Windows Desktop” Forum will be migrating to a new home on Microsoft Q&A!
We’ve listened to your feedback on how we can enhance the forum experience. Microsoft Q&A allows us to add new functionality and enables easier access to all the technical resources most useful to you, like Microsoft Docs and Microsoft Learn.
Now until July 26, 2020:
- You can post any new questions on Microsoft Q&A or here.
From July 27, 2020 until August 10, 2020:
- New posts – We invite you to post new questions in the “Media Foundation Development for Windows Desktop” forum’s new home on Microsoft Q&A. The current forum will not allow any new questions.
- Existing posts – Interact here with existing content, answer questions, provide comments, etc.
August 10, 2020 onward:
- This forum will be closed to all new and existing posts and all interactions will be on Microsoft Q&A.
We are excited about moving to Microsoft Q&A and seeing you there.
"Win32 API" forum will be migrating to a new home on
Microsoft Q&A !
We invite you to post new questions in the "Win32 API" forum’s new home on
Microsoft Q&A !
For more information, please refer to the
sticky post.
How to merge two videos or two clips.
Hi,
I'm trying to merge two MP4 files or two clips into one file using IMFSourceReader and IMFSinkWriter. But the data of the second file or clip sometimes cannot be written successfully. What could the problem be? Do the timestamps of the video stream and audio stream mismatch, is a key frame missing, or is the media type incorrect? Please help.
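A sketch of the timestamp handling that appending the second clip typically needs: offset each of its samples by the duration already written from the first clip (llBaseTime, pSample, pSinkWriter and dwStreamIndex are placeholders; error handling trimmed):

LONGLONG llSampleTime = 0;
HRESULT hr = pSample->GetSampleTime(&llSampleTime);
if (SUCCEEDED(hr))
    hr = pSample->SetSampleTime(llSampleTime + llBaseTime);  // rebase onto the first clip's end
if (SUCCEEDED(hr))
    hr = pSinkWriter->WriteSample(dwStreamIndex, pSample);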
Dshowbridge enable between Media foundation and Directshow
If I enable the DShow Bridge for a camera device or for the system, which portions of Media Foundation and DirectShow are enabled?
Could you share more details about the DShow Bridge?