Channel: Media Foundation Development for Windows Desktop forum

Windows Media Player doesn’t seek properly with custom MFT video decoder


I wrote a custom asynchronous Direct3D-aware MFT video decoder. After registering it, it works correctly both in my own application (built on IMFMediaSession) and in Windows Media Player. But I ran into an unresolved issue: I get a deadlock if I do the following

1) Start Windows Media Player

2) Open any video file with appropriate encoding

3) Turn repeat on

4) Play the file and wait until it starts repeating (once or many times, it doesn't matter)

5) Try to seek (backward or forward, it doesn't matter)

The player stops playing and there is no way to recover except restarting it.

If I don't turn on the repeat option in WMP, or if I seek before the very first repeat, everything works properly.

Going through debug output and MFTrace.exe logs, I discovered the following:

When I seek BEFORE the first repeat, I see:

16980,219C 11:47:59.12676 CMFTransformDetours::ProcessMessage @09B5D878 Message type=0x00000000 MFT_MESSAGE_COMMAND_FLUSH, param=00000000

....

 

16980,46FC 11:47:59.13065 CMFTransformDetours::ProcessMessage @09B5D878 Message type=0x10000001 MFT_MESSAGE_NOTIFY_END_STREAMING, param=00000000

 

....

 

16980,4864 11:47:59.13077 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::OnCommand (begin): cmds.size=2 cmd.back.kind=0

16980,4864 11:47:59.13082 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::Command::OnStart: time_ns=6627226800 time[sec]=6.62723

16980,4864 11:47:59.13086 CMFByteStreamDetours::Seek @07AFB598 Seeked to (origin 0x00000000, offset 0B, flags 0x00000001), new current position 0B

16980,4864 11:47:59.13088 CKernel32ExportDetours::OutputDebugStringA @ MkvReader::Seek: called pStream->Seek; hr=0x0; pos=0; curr_pos=0

16980,4864 11:47:59.13097 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::OnCommand (end): cmds.size=1 cancel=true

16980,3C78 11:47:59.13101 CMFMediaSourceDetours::EndGetEvent @053142A8 Met=206 (null), value @0531A718,

16980,219C 11:47:59.13103 CMFMediaStreamDetours::EndGetEvent @0531A718 Met=202 MEStreamStarted, value 66272268,

16980,3C78 11:47:59.13104 CMFMediaSourceDetours::EndGetEvent @053142A8 Met=201 (null), value 66272268,

 

....

 

16980,219C 11:47:59.13207 CMFMediaSourceDetours::EndGetEvent @05F73C38 Met=206 (null), value @05F15AF0,

16980,219C 11:47:59.13207 CMFQualityManagerDetours::NotifyQualityEvent @07B09448 Object=0x05F73C38 Event=0x0AD7C5E0 Type=206

16980,219C 11:47:59.13209 CMFMediaSourceDetours::EndGetEvent @05F73C38 Met=203 MESourceSeeked, value 66272268,

16980,46FC 11:47:59.13212 CMFMediaStreamDetours::EndGetEvent @05F15AF0 Met=204 (null), value 66272268,

 

....

 

16980,46FC 11:47:59.13247 CMFTransformDetours::ProcessMessage @09B5D878 Message type=0x10000003 MFT_MESSAGE_NOTIFY_START_OF_STREAM, param=00000000

And here is the same sequence AFTER the first repeat:

 

16856,124C 11:54:08.90724 CMFTransformDetours::ProcessMessage @08F1D878 Message type=0x00000000 MFT_MESSAGE_COMMAND_FLUSH, param=00000000

 

....

 

16856,39E0 11:54:08.91148 CMFTransformDetours::ProcessMessage @08F1D878 Message type=0x10000001 MFT_MESSAGE_NOTIFY_END_STREAMING, param=00000000

 

....

 

16856,43CC 11:54:08.91162 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::OnCommand (begin): cmds.size=2 cmd.back.kind=0

16856,43CC 11:54:08.91168 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::Command::OnStart: time_ns=7919327700 time[sec]=7.91933

16856,43CC 11:54:08.91173 CMFByteStreamDetours::Seek @0B0FD2F8 Seeked to (origin 0x00000000, offset 0B, flags 0x00000001), new current position 0B

16856,43CC 11:54:08.91175 CKernel32ExportDetours::OutputDebugStringA @ MkvReader::Seek: called pStream->Seek; hr=0x0; pos=0; curr_pos=0

16856,43CC 11:54:08.91180 CKernel32ExportDetours::OutputDebugStringA @ WebmMfSource::OnCommand (end): cmds.size=1 cancel=true

16856,1FB4 11:54:08.91184 CMFMediaSourceDetours::EndGetEvent @0535A4B8 Met=206 (null), value @0535A8D0,

16856,124C 11:54:08.91187 CMFMediaStreamDetours::EndGetEvent @0535A8D0 Met=202 MEStreamStarted, value 79193277,

16856,1FB4 11:54:08.91189 CMFMediaSourceDetours::EndGetEvent @0535A4B8 Met=201 (null), value 79193277,

 

....

 

16856,124C 11:54:08.91294 CMFMediaSourceDetours::EndGetEvent @07490CE0 Met=206 (null), value @0BD74AA0,

16856,124C 11:54:08.91295 CMFQualityManagerDetours::NotifyQualityEvent @07497010 Object=0x07490CE0 Event=0x0BDEA848 Type=206

16856,124C 11:54:08.91297 CMFMediaSourceDetours::EndGetEvent @07490CE0 Met=203 MESourceSeeked, value 79193277,

16856,39E0 11:54:08.91300 CMFMediaStreamDetours::EndGetEvent @0BD74AA0 Met=204 (null), value 79193277,

 

....

 

16856,1FB4 11:54:08.91357 CMFTransformDetours::ProcessMessage @07B30264 Message type=0x00000000 MFT_MESSAGE_COMMAND_FLUSH, param=00000000

16856,1FB4 11:54:08.91396 CMFPresentationClockDetours::GetTime @07496E90 Time 39296461hns

16856,1FB4 11:54:09.91465 CMFPresentationClockDetours::GetTime @07496E90 Time 49303350hns

 

In the first case, the Media Foundation framework sends the MFT_MESSAGE_NOTIFY_START_OF_STREAM message and the player starts playing again. But in the second case, the framework sends an MFT_MESSAGE_COMMAND_FLUSH message instead, and the player hangs forever.

Has anybody run into this behavior?
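For anyone comparing notes: per the asynchronous MFT contract, MFT_MESSAGE_COMMAND_FLUSH should drop all queued work, and input is requested again only after MFT_MESSAGE_NOTIFY_START_OF_STREAM arrives. If the decoder never queues METransformNeedInput after a flush that is not followed by a start-of-stream, the pipeline can stall much as described. A minimal sketch of that handling, with a hypothetical sample queue and event queue standing in for the decoder's real members:

#include <mfapi.h>
#include <mfidl.h>
#include <mftransform.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch of the message handling inside a hypothetical async decoder.
// "queued" holds input samples not yet decoded; "eventQueue" is the
// MFT's IMFMediaEventQueue (from MFCreateEventQueue).
HRESULT HandleTransformMessage(MFT_MESSAGE_TYPE eMessage,
                               std::vector<ComPtr<IMFSample>>& queued,
                               IMFMediaEventQueue* eventQueue)
{
    switch (eMessage)
    {
    case MFT_MESSAGE_COMMAND_FLUSH:
        // Drop everything; stale METransformHaveOutput events must not
        // be honored with pre-flush frames.
        queued.clear();
        return S_OK;

    case MFT_MESSAGE_NOTIFY_START_OF_STREAM:
    {
        // An async MFT only receives input it asked for, so it must
        // queue METransformNeedInput here or the pipeline stalls.
        ComPtr<IMFMediaEvent> ev;
        HRESULT hr = MFCreateMediaEvent(METransformNeedInput, GUID_NULL,
                                        S_OK, nullptr, &ev);
        if (FAILED(hr)) return hr;
        ev->SetUINT32(MF_EVENT_MFT_INPUT_STREAM_ID, 0);
        return eventQueue->QueueEvent(ev.Get());
    }

    default:
        return S_OK;
    }
}

Given the traces above, it may also be worth checking what the decoder does when the flush arrives while samples are still in flight on the Direct3D device, since the second trace ends right after the flush with only clock ticks following.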

Media Foundation AAC Encoder not respecting number of output channels for stereo


Hi,

I'm implementing an AAC encoder using Media Foundation Transforms. I've set the number of channels to 2 on both my input and output media types. When I use GetOutputAvailableType() to retrieve the IMFMediaType that I've set for the output and then call output_type->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &num), the value of num is 1 instead of the requested 2. This means that the AudioSpecificConfig used by decoders is also wrong.

I didn't find anything on MSDN that explains why the number of channels changes.

Why is the number of audio channels changing?

Thanks
roxlu
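It is hard to say from the description why the type reverts to mono, but one thing worth trying is building the output type explicitly rather than patching one returned by GetOutputAvailableType(). A minimal sketch, assuming the encoder MFT is already instantiated; the attribute values used are ones the AAC encoder documents as supported:

#include <mfapi.h>
#include <mftransform.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Configure a stereo AAC output type on an already-created AAC encoder MFT,
// then read back what the encoder actually accepted.
HRESULT ConfigureAacStereoOutput(IMFTransform* encoder)
{
    ComPtr<IMFMediaType> outType;
    HRESULT hr = MFCreateMediaType(&outType);
    if (FAILED(hr)) return hr;

    outType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Audio);
    outType->SetGUID(MF_MT_SUBTYPE, MFAudioFormat_AAC);
    outType->SetUINT32(MF_MT_AUDIO_BITS_PER_SAMPLE, 16);
    outType->SetUINT32(MF_MT_AUDIO_SAMPLES_PER_SECOND, 44100);
    outType->SetUINT32(MF_MT_AUDIO_NUM_CHANNELS, 2);           // stereo
    outType->SetUINT32(MF_MT_AUDIO_AVG_BYTES_PER_SECOND, 16000);

    // For encoders, set the output type before the input type.
    hr = encoder->SetOutputType(0, outType.Get(), 0);
    if (FAILED(hr)) return hr;

    // Verify the channel count the encoder actually took.
    ComPtr<IMFMediaType> actual;
    UINT32 channels = 0;
    hr = encoder->GetOutputCurrentType(0, &actual);
    if (SUCCEEDED(hr))
        hr = actual->GetUINT32(MF_MT_AUDIO_NUM_CHANNELS, &channels);
    return hr;
}

If GetOutputCurrentType also reports 1 channel after SetOutputType succeeded with 2, that would point at an encoder bug rather than a configuration problem.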

Video Recording Hangs on IMFSinkWriter->Finalize();


I have an issue when finalizing a video recording to .mp4 with Media Foundation: the call to IMFSinkWriter->Finalize() hangs forever. It doesn't always happen, and it can happen on almost any machine (seen on Windows Server, 7, 8, and 10). Flush() is called on the audio and video streams beforehand, and no new samples are added between Flush and Finalize. Any ideas on what could cause Finalize to hang forever?

Things I've tried:

  • Logging all HRESULTs to check for any issues (was already checking them before proceeding to the next line of code)

Everything comes back as S_OK, not seeing any issues

  • Added an IMFSinkWriterCallback on the stream to get callbacks when the stream processes markers (adding markers every 10 samples) and when Finalize() completes (see the sketch after this list)

Haven't been able to reproduce the hang since adding this, but it should give the best information about what's going on once I do.

  • Searched code samples online to see how others are setting up the Sink Writer and how Finalize() is used

Didn't find many samples, and my code looks similar to the ones I did find

  • Looked at the encoders available and used on each system, including the version of the encoder DLL

The encoders varied between the AMD H.264 Hardware MFT Encoder and the H264 Encoder MFT on machines that could reproduce the issue. Versions didn't seem to matter, and some of the machines had up-to-date video drivers.
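Regarding the callback idea in the list above: a minimal IMFSinkWriterCallback that signals an event from OnFinalize makes it easy to distinguish a Finalize() that is merely slow from one that is truly stuck. A sketch, with the writer wiring shown in comments since the rest of the setup is app-specific:

#include <windows.h>
#include <mfreadwrite.h>
#include <mfapi.h>
#include <wrl/implements.h>
using namespace Microsoft::WRL;

// Minimal sink-writer callback: OnMarker reports progress, OnFinalize
// signals completion. If m_finalized never fires, Finalize is stuck.
class SinkWriterCallback
    : public RuntimeClass<RuntimeClassFlags<ClassicCom>, IMFSinkWriterCallback>
{
public:
    STDMETHODIMP OnFinalize(HRESULT hrStatus) override
    {
        // Log hrStatus here; a failure code is more useful than a hang.
        SetEvent(m_finalized);
        return S_OK;
    }
    STDMETHODIMP OnMarker(DWORD dwStreamIndex, LPVOID pvContext) override
    {
        // Fires for each IMFSinkWriter::PlaceMarker call; the last marker
        // seen shows roughly how far the writer got before stalling.
        return S_OK;
    }
    HANDLE m_finalized = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    ~SinkWriterCallback() { CloseHandle(m_finalized); }
};

// Wiring sketch (error handling omitted):
//   ComPtr<IMFAttributes> attr;
//   MFCreateAttributes(&attr, 1);
//   attr->SetUnknown(MF_SINK_WRITER_ASYNC_CALLBACK, callback.Get());
//   MFCreateSinkWriterFromURL(L"out.mp4", nullptr, attr.Get(), &writer);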

How to capture videos from webcam by using Media Foundation on Win7



I'm new to media development with Media Foundation.

I'm writing a program that captures live video from webcam, displays on screen and processes every frame.

I studied documents.

If I use the Media Session, I can only display video from files, not from a webcam.

So I decided not to use Media Session.

I manually obtained Media Source and Source Reader to get samples.

But I don't know how to render the raw samples on screen by using Media Sinks.

Every buffer obtained from IMFSample contains a YUY2 image.

I converted it to RGB24 format and showed it on screen using the StretchBlt function.

But all I get is grayscale video.

When I looked at the buffer in the debugger, all the U and V values in the YUY2 image were 0x80.

What is wrong?

Please help me.
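Two things are worth checking here. First, if the U and V bytes really are 0x80 in the raw buffer, the chroma is genuinely neutral and no conversion will bring the color back; in that case verify with IMFSourceReader::GetCurrentMediaType that the negotiated subtype is really MFVideoFormat_YUY2 and not something else (e.g. NV12) being misread. Second, the conversion itself: YUY2 packs two pixels into four bytes as Y0 U Y1 V, and a common bug is reading it as planar data so the chroma never reaches the output. A sketch of the standard BT.601 conversion (the formula from the MSDN "Converting 8-bit YUV to RGB888" article), writing B,G,R byte order as GDI's 24-bpp DIBs expect:

#include <algorithm>
#include <cstdint>
#include <initializer_list>

static inline uint8_t Clip(int v)
{
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// YUY2 (Y0 U Y1 V) -> 24-bpp BGR, BT.601 studio range.
// Assumes width is even and src/dst rows are tightly packed.
void Yuy2ToRgb24(const uint8_t* src, uint8_t* dst, int width, int height)
{
    for (int row = 0; row < height; ++row)
    {
        for (int col = 0; col < width; col += 2)
        {
            int y0 = src[0] - 16, u = src[1] - 128;
            int y1 = src[2] - 16, v = src[3] - 128;
            src += 4;
            for (int y : { y0, y1 })
            {
                *dst++ = Clip((298 * y + 516 * u           + 128) >> 8); // B
                *dst++ = Clip((298 * y - 100 * u - 208 * v + 128) >> 8); // G
                *dst++ = Clip((298 * y + 409 * v           + 128) >> 8); // R
            }
        }
    }
}

Note also that real capture buffers often have a stride larger than width*2; query it via MF_MT_DEFAULT_STRIDE or IMF2DBuffer rather than assuming tight packing.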

How to set exposure time of webcam in Media Foundation


Hello everyone,

I am new to Media Foundation.

I have managed to develop a simple program that captures video frames from a web camera and draws them on the screen.

The camera I use is an infrared camera, and the images read from it are too bright.

So I want to adjust the camera's exposure time, but I can't find out how to do it.

I also wonder whether such a feature exists in Media Foundation at all.

Please help me if anyone knows it.
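Media Foundation itself has no dedicated exposure API for webcams, but the capture source created by MFCreateDeviceSource can usually be queried for the DirectShow IAMCameraControl interface, which carries the exposure property. A minimal sketch, assuming a UVC camera whose driver supports manual exposure:

#include <algorithm>
#include <mfidl.h>
#include <strmif.h>   // IAMCameraControl (shared with DirectShow)
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Set manual exposure on a capture source obtained from MFCreateDeviceSource.
// "exposure" is expressed as log2(seconds); e.g. -5 is roughly 1/32 s.
HRESULT SetManualExposure(IMFMediaSource* source, long exposure)
{
    ComPtr<IAMCameraControl> camControl;
    HRESULT hr = source->QueryInterface(IID_PPV_ARGS(&camControl));
    if (FAILED(hr)) return hr;   // driver exposes no camera control

    long minVal = 0, maxVal = 0, step = 0, defVal = 0, caps = 0;
    hr = camControl->GetRange(CameraControl_Exposure,
                              &minVal, &maxVal, &step, &defVal, &caps);
    if (FAILED(hr)) return hr;

    long clamped = std::min(maxVal, std::max(minVal, exposure));
    return camControl->Set(CameraControl_Exposure, clamped,
                           CameraControl_Flags_Manual);
}

Whether this works depends entirely on the camera driver; if QueryInterface or GetRange fails, the device simply does not expose the control this way.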

Video artifacts while feeding H264 decoder from live source.

Hello gentlemen.

I wrote an RTMP source component that implements IMFMediaSource and IMFMediaEngineExtension.
Predominantly I use it as a Media Engine extension.

The problem I have is video artifacts that appear if the Microsoft H264 decoder
doesn't receive enough frames for buffering before playback starts.

I configured IMFMediaEngine with MF_MEDIA_ENGINE_REAL_TIME_MODE
and additionally call IMFMediaEngineEx->SetRealTimeMode(TRUE),
but it has no effect.

I wait for the MF_MEDIA_ENGINE_EVENT_FIRSTFRAMEREADY event before starting playback,
but that has no effect either.

The MF_MEDIA_ENGINE_EVENT_BUFFERINGSTARTED and MF_MEDIA_ENGINE_EVENT_BUFFERINGENDED
events never fire.

The only thing that has helped is a static delay of 2-5 seconds before playback starts.
But sometimes, after some playback time (~15 min), the artifacts return.
-----------
So my question is: how do I configure the Media Engine or my RTMP source to avoid this
buffering problem and the video artifacts?
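One thing that may be worth trying: real-time mode can also be requested at Media Engine creation time through the class-factory flags, not only via SetRealTimeMode afterwards. Whether that changes the H264 decoder's pre-roll behavior is not documented, so treat this purely as an experiment. A sketch, assuming the app already has an IMFMediaEngineNotify implementation and a DXGI device manager:

#include <mfapi.h>
#include <mfmediaengine.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a Media Engine with MF_MEDIA_ENGINE_REAL_TIME_MODE set at creation.
HRESULT CreateRealTimeEngine(IMFMediaEngineNotify* notify,
                             IMFDXGIDeviceManager* dxgiManager,
                             IMFMediaEngine** engine)
{
    ComPtr<IMFMediaEngineClassFactory> factory;
    HRESULT hr = CoCreateInstance(CLSID_MFMediaEngineClassFactory, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&factory));
    if (FAILED(hr)) return hr;

    ComPtr<IMFAttributes> attr;
    hr = MFCreateAttributes(&attr, 2);
    if (FAILED(hr)) return hr;
    attr->SetUnknown(MF_MEDIA_ENGINE_CALLBACK, notify);
    attr->SetUnknown(MF_MEDIA_ENGINE_DXGI_MANAGER, dxgiManager);

    return factory->CreateInstance(MF_MEDIA_ENGINE_REAL_TIME_MODE,
                                   attr.Get(), engine);
}

Failing that, buffering enough frames inside the RTMP source itself (holding back RequestSample completions until a few seconds of video are queued) keeps the delay logic in one place instead of relying on a fixed sleep before play.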

possible WMF AAC encoder bug on Win10


Hi guys, 

Recently I found a problem in our program that may be caused by a WMF AAC encoder bug. Today I was able to reproduce it with a Windows SDK sample project.

For my test I changed the Transcode sample project that comes with the Windows SDK from transcoding to WMV to transcoding to MP4. I've uploaded the code change here: https://dl.dropboxusercontent.com/u/89678527/change.patch

I used the generated executable to transcode https://dl.dropboxusercontent.com/u/89678527/sync_test.mp4 to MP4. The total length of the audio in the source is 0:01:30.163. While the files generated on Win7 and Win8 have the same audio duration, the file generated on Win10 (Build 10240) has an audio duration of only 0:01:29.512, and you can see that the audio content differs from the source (watch the position of the last pulse). See the diff here: https://dl.dropboxusercontent.com/u/89678527/Capture.PNG

I've also uploaded the executable here https://dl.dropboxusercontent.com/u/89678527/Transcode.rar so you can try it directly.

Thanks



Acceptable resolutions for the IMFSinkWriter using the MFVideoFormat_H264 format


I'm encoding a series of frames using the IMFSinkWriter set to the MFVideoFormat_H264 video format, but I've noticed that certain resolutions are not accepted. For example, when I set the resolution to 801 x 600, I get error code -2147418113 when I try to write a sample. 800 x 600 works. I haven't found any documentation that states any resolution limitation. Can someone enlighten me?

Thanks,

Jess
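For what it's worth, -2147418113 is 0x8000FFFF (E_UNEXPECTED). H.264 uses 4:2:0 chroma subsampling, so encoders generally require even (and some hardware encoders prefer 16-aligned) frame dimensions; 801 is odd, 800 is not. A hedged workaround sketch: round the dimensions down to even values (or crop/pad the frames) before setting MF_MT_FRAME_SIZE on the sink writer's media types:

#include <mfapi.h>
#include <mfidl.h>

// Clamp a requested resolution to even values before configuring the
// sink writer, e.g. 801 x 600 becomes 800 x 600.
HRESULT SetEvenFrameSize(IMFMediaType* type, UINT32 width, UINT32 height)
{
    return MFSetAttributeSize(type, MF_MT_FRAME_SIZE,
                              width & ~1u, height & ~1u);
}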


Tee Node and Custom MFT


I have the topology below; it works fine for all frames of the video sequence:

Source --> Video Decoder MFT --->Custom MFT ---> Video Encoder MFT ---> File Sink MFT

If I insert a Tee node between the "Custom MFT" and "Video Encoder MFT" to render a preview using the EVR, then the application doesn't run for all frames of the input video. It ends early: I see only 50% of the frames in the output file. I even verified it by keeping a frame counter in the "Custom MFT".

I tried to figure out the cause using mftrace, but it didn't help much. Is it possible for a node to skip frames if the custom MFT takes too long? Can topology behavior change if the 2 sinks run at different speeds? Does the topology stop if one of the sinks finishes early? Is there a chance of frames getting dropped at the input side of the "Custom MFT"?

I experimented with 2 resolutions to check the behavior; the issue is observed only with UHD (3840x2160) video. It works well for HD (1920x1080).

I have done some more experiments changing the Tee node properties MF_TOPONODE_PRIMARYOUTPUT and MF_TOPONODE_DISCARDABLE, but nothing helped (the sketch below shows the combination usually suggested).

Can someone give insights into the cause of this behavior? Is there a property I can set to avoid frame drops/skips in the entire topology? No node should drop any frames; it's OK to run at a lower speed.
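For comparison, the combination usually suggested for this layout marks the encoder branch as the tee's primary output and only the preview branch as discardable, so any rate-matching drops should land on the EVR side. A sketch, with the topology and the surrounding nodes assumed to already exist:

#include <mfidl.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// topology, customMftNode, encoderNode and evrNode are assumed to exist.
HRESULT InsertTee(IMFTopology* topology,
                  IMFTopologyNode* customMftNode,
                  IMFTopologyNode* encoderNode,
                  IMFTopologyNode* evrNode)
{
    ComPtr<IMFTopologyNode> teeNode;
    HRESULT hr = MFCreateTopologyNode(MF_TOPOLOGY_TEE_NODE, &teeNode);
    if (FAILED(hr)) return hr;
    topology->AddNode(teeNode.Get());

    customMftNode->ConnectOutput(0, teeNode.Get(), 0);
    teeNode->ConnectOutput(0, encoderNode.Get(), 0); // encoder -> file sink
    teeNode->ConnectOutput(1, evrNode.Get(), 0);     // EVR preview

    // Encoder branch is primary; preview branch may drop samples.
    teeNode->SetUINT32(MF_TOPONODE_PRIMARYOUTPUT, 0);
    return evrNode->SetUINT32(MF_TOPONODE_DISCARDABLE, TRUE);
}

Since these attributes were already tried, the UHD-only failure also suggests the EVR's sample pool backing up at 3840x2160 and stalling the tee; an mftrace showing which node stops issuing sample requests first would narrow that down.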




Running Windows Service to invoke an external EXE with Service Account


Hi

As per the requirement, I need to run a Windows service under a service account that doesn't have interactive logon.

I was able to run the service, but my service contains code to launch an external EXE, and that isn't happening. If I try with the Local System account, the EXE is launched from the service. Please let me know why it doesn't work with a service account that lacks the interactive logon permission.

Thanks

Create network streaming sink - howto?

Hi!

Is there an example of how to use the MFCreateASFStreamingMediaSinkActivate function?
The documentation on MSDN is pretty sparse.
How do I get/prepare the first parameter (pByteStreamActivate) when the main goal is to stream ASF over the network?
There is nothing about this in the SDK samples.

Thank you in advance!

codecs


I want to burn CDs. I cannot do so due to a problem with codecs. I don't really know what codecs are; I just want to resolve this. What do I do?

H264 Decoder MFT ProcessOutput and "CopyDecodedFrame failed"


I am using the H.264 video decoder on a Win 10 system from a regular GUI (Qt) application, and in this context all works well.

I am also attempting to use it from within a Unity application (as a native plugin), and in this case it fails. Specifically, after we get MF_E_TRANSFORM_STREAM_CHANGE from ProcessOutput, and provided ProcessOutput does not return MF_E_TRANSFORM_NEED_MORE_INPUT, we get a "WinRT Originate Error". In the working scenario we would get S_OK and the actual frame would be returned. Attaching a debugger, we see that the error detail is:

error=E_FAIL

message=CopyDecodedFrame failed

It sounds like an internal error. The allocated buffer size is sufficiently large and is based on the size returned by GetOutputStreamInfo. As a precaution we doubled cbSize, but that made no difference. cbAlignment is zero.

It should be noted that from a Unity app it isn't clear that there is a message loop, or that we are called from a thread with a valid message loop. So, to be cautious, we created a message-loop thread (GetMessage / DispatchMessage) and made sure all interaction with the H264 decoder was performed from that thread only. We also made sure to call CoInitializeEx() from this thread. Another important thing to note is that Unity uses DirectX9 or DX11, so if the MFT H264 decoder depends on the GPU or DX in some way, they could clash.

Could someone with visibility into the codec, or with prior knowledge, help us understand possible causes of "CopyDecodedFrame failed"?

TIA.
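One pattern worth double-checking in the failing path: after ProcessOutput returns MF_E_TRANSFORM_STREAM_CHANGE, the output type must be renegotiated and the output buffer size re-queried before calling ProcessOutput again, since the required size often grows at that point. A sketch of the usual sequence, assuming "decoder" is the H264 decoder IMFTransform:

#include <mfapi.h>
#include <mferror.h>
#include <mftransform.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Called when ProcessOutput has just returned MF_E_TRANSFORM_STREAM_CHANGE.
HRESULT HandleStreamChange(IMFTransform* decoder, MFT_OUTPUT_STREAM_INFO* info)
{
    // Pick the first output type the decoder now offers and set it.
    ComPtr<IMFMediaType> newType;
    HRESULT hr = decoder->GetOutputAvailableType(0, 0, &newType);
    if (FAILED(hr)) return hr;
    hr = decoder->SetOutputType(0, newType.Get(), 0);
    if (FAILED(hr)) return hr;

    // The required buffer size may have changed; reallocate the output
    // sample from info->cbSize / info->cbAlignment before retrying.
    return decoder->GetOutputStreamInfo(0, info);
}

Separately, if the decoder has been put into D3D-aware mode (MFT_MESSAGE_SET_D3D_MANAGER) in one host but not the other, internal copy failures are plausible when the DX11 device Unity owns differs from the one handed to the decoder; forcing pure software output is a useful A/B test.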


Note for Microsoft team : Design Flaw when it comes to queueing and playing Topologies that have custom and non-custom Live Media Sources mixed


Hello

I want to leave this note here for Microsoft's Media Foundation developers. There is a big design flaw when it comes to queueing and playing topologies that mix custom and non-custom live media sources.

To be precise, it is the switching from one presentation to the next. It is simply not possible with live sources. Well, it is possible with self-written sources, but if you mix these with others, like device sources (webcam, microphone, etc.), then there is no way of switching over to the next presentation. My original post (link) is still unanswered, and although I never got any answer to my questions here on the forum (while I am answering others), I will leave this here so that you can fix the problem in the future (this is directed at Microsoft's developers).

I worked on the problem for almost 2-3 weeks over the past 6 months. Since I had a lot of work to do I put it aside, but recently I dedicated an entire week to it and made 100% sure all possibilities were considered. First I will lay out my scenario, then describe the problem with the Media Session class.

Scenario:

Basically, I have a recorder which captures video and audio from several live sources (some of them self-written/custom). A recording starts or stops when the user presses a hotkey. So on key press I am either starting or stopping the Media Session.

My topology can change a lot from one recording to the next, namely removing complete branches with up to 6 nodes each, because the user might change application settings before starting a new recording. Since the topology changes can be huge, I simply create the topology for each recording from a settings struct in the application. So I create the Media Session at application start, and when the user starts a recording I create a topology and queue it.

Problem:

The problem is that I can't play any topology other than the first one, because when calling Start the Media Session still sees the first queued topology as the "current presentation", as there are no MEEndOfStream and thus no MEEndOfPresentation events from the live sources. When I press stop and call ClearTopologies, or even SetTopology with the MFSESSION_SETTOPOLOGY_CLEAR_CURRENT flag, the Media Session still keeps the topology inside. Since live sources do not send an MEEndOfStream event, there is never a transition to the next presentation. Even if I stop the Media Session, shut down all sources, sinks, and async transforms (IMFShutdown), and then clear the topology, it still exists inside the Media Session and is considered the "current presentation".

Design Flaw:

The Media Session has no Remove or Delete function for topologies (like the Sequencer Source has); there is only SetTopology. You can't remove a topology completely from the Media Session by hand, and since live sources never reach "end of stream", it is not possible to switch over to the next topology.

Workaround:

There are currently only 2 ways of doing it:

1. Create a Media Session for every topology you want to play. This is very inefficient when you only want to queue and play one topology at a time (like in my case).

2. Send an MEEndOfStream event from inside your custom live sources (see the sketch below). Since the stream literally has no end, it's kind of a fraud, but it works. However, it only works with self-written sources; if you mix your self-written sources with others, it doesn't work (like in my case).
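For completeness, workaround 2 amounts to queueing the events yourself from the custom source when the app decides the "presentation" is over. A sketch of the relevant fragments, with the stream and source classes and their event queues hypothetical:

#include <mfapi.h>
#include <mfidl.h>

// Inside a custom live stream: "streamQueue" is the stream's
// IMFMediaEventQueue (from MFCreateEventQueue).
HRESULT SignalEndOfStream(IMFMediaEventQueue* streamQueue)
{
    return streamQueue->QueueEventParamVar(MEEndOfStream, GUID_NULL,
                                           S_OK, nullptr);
}

// Inside the custom source, once every stream has signaled end-of-stream:
HRESULT SignalEndOfPresentation(IMFMediaEventQueue* sourceQueue)
{
    return sourceQueue->QueueEventParamVar(MEEndOfPresentation, GUID_NULL,
                                           S_OK, nullptr);
}

As the note says, this only helps for sources you own; a device source will never emit these events on its own.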

Suggestions:

The Media Session and also the Sequencer Source are obviously designed for dealing with files/sources that have a predetermined duration. The Media Session is nearly useless for live sources in its current implementation. If you look at the 2 workarounds I described, it's obvious that you (Microsoft) have to change the programming model of the Media Session. Your CPlayer sample is exemplary; what you should bring to us developers is a refined Media Session and a CRecorder sample.

Even though you have the live source flag under source characteristics, there is not much attention paid to live sources at all in the Media Foundation documentation. That alone should have you scratching your head, because webcams, microphones, stream servers and several others are all live sources and very common.

Of course I can't wait for you to fix this problem in Windows 12 or something, so I have to rely on the very inefficient workaround of creating a Media Session for each topology I want to play. I can tell you I was close to detouring the MF API functions just to be able to send an MEEndOfStream for every stream, even ones I don't own. There is obviously a big design flaw when it comes to encoding live sources in a Media Session, and I hope you will change that in the near future.

co0Kie (very sad and almost dead) signing off

... 






Can anyone get IMFMediaEngine->TransferVideoFrame() to work with an IWICBitmap?


I've been searching around and have seen this question asked in a couple of places, but never with any reply.

Working from the media engine sample:

https://code.msdn.microsoft.com/windowsapps/Media-Engine-Playback-ce1c82f0

All I've done is add construction of an IWICBitmap:

MEDIA::ThrowIfFailed(
    CoCreateInstance(CLSID_WICImagingFactory,
                     NULL, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&mPiFactory))
);

MEDIA::ThrowIfFailed(
    mPiFactory->CreateBitmap(640, 480, GUID_WICPixelFormat24bppBGR,
                             WICBitmapCacheOnDemand, &mWicBitmap)
);

and then in OnTimer(), added this above the existing TransferVideoFrame call:

m_spMediaEngine->TransferVideoFrame(mWicBitmap.Get(), nullptr, &r, &m_bkgColor);

but it always fails. I've tried adding the MFVideoNormalizedRect argument (0, 0, 1, 1), but that makes no difference.

I've tried using different pixel formats for the IWICBitmap, which varies the error a little. For example,

GUID_WICPixelFormat24bppRGB

GUID_WICPixelFormat24bppBGR

give me "One or more arguments are invalid."

I've tried matching the format that the sample's DX11 texture is created with, but

GUID_WICPixelFormat32bppBGRA

gives me "No such interface supported."

Is there something obvious I'm missing here?

thanks much!

Want volume control on individual media file, not application-wide one.


I am migrating my own project, which was coded on DirectShow, to Media Foundation.

In DShow there is an interface, IBasicAudio, with the methods put_Volume / get_Volume, which make it easy to set/get the volume per media file.

But in MF, volume control becomes mission impossible. Compared with DShow, obtaining the interfaces for volume control is more complicated, and even when I get the IMFSimpleAudioVolume interface, it works in an application-wide manner; that is, it does exactly the same thing as tuning the application volume in SndVol.exe. That is absolutely unacceptable for me, because I need to control the volume of different components within the same application individually, such as muting (or decreasing the amplitude of) the BGM playing through MF while keeping sound effects playing through XAudio2, etc.

I tried IMFAudioStreamVolume, but this seems useless in practice (poorer than the application-wide IMFSimpleAudioVolume).

The question is: is there any way to do the same task as IBasicAudio::put_Volume in MF? If YES, please kindly provide code examples; if NO, please suggest alternate solutions. (Staying on DShow is a considerable option if it is the only one, although I'd really prefer not to.)
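For reference, the service usually cited as the closest MF equivalent of IBasicAudio::put_Volume is IMFAudioStreamVolume obtained through MR_STREAM_VOLUME_SERVICE, which scales the streaming audio renderer's own stream rather than the whole audio session. A minimal sketch, assuming playback through an IMFMediaSession:

#include <mfapi.h>
#include <mfidl.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Set a uniform 0.0..1.0 volume on all channels of the session's
// streaming audio renderer.
HRESULT SetStreamVolume(IMFMediaSession* session, float volume)
{
    ComPtr<IMFAudioStreamVolume> streamVolume;
    HRESULT hr = MFGetService(session, MR_STREAM_VOLUME_SERVICE,
                              IID_PPV_ARGS(&streamVolume));
    if (FAILED(hr)) return hr;

    UINT32 channels = 0;
    hr = streamVolume->GetChannelCount(&channels);
    if (FAILED(hr)) return hr;

    std::vector<float> levels(channels, volume);
    return streamVolume->SetAllVolumes(channels, levels.data());
}

If that still behaves session-wide in practice, another option sometimes suggested is to give each player its own WASAPI audio session by setting MF_AUDIO_RENDERER_ATTRIBUTE_SESSION_ID (a per-player GUID) when creating the audio renderer, so that IMFSimpleAudioVolume then affects only that one session.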

Get the codec delay and block size from the Windows Mp3 Encoder

Is it possible to retrieve the codec delay and block size from the Windows Mp3 Encoder?

Reading script commands

How do I read script commands from a stream using a topology?

Windows Media Video 9 Screen Encoder

Hi
I'm trying to use the Windows Media Video 9 Screen Encoder, with no result.
I have working code for a WMV encoder and tried to configure the Screen Encoder the same way.
Please look at my code:
https://gist.github.com/t-artikov/c45338c5ac9d49d9fbb0

First, I register the Screen Encoder so that it can be found by the MFTEnumEx function.
Then I create input and output media types and instantiate an encoder transform.
Next I set the media types on the transform.
Before setting the output media type, I need to add the codec private data to it.
A problem occurs at this step:
I can get the private data size (privateData->GetPrivateData(nullptr, &dataSize)),
but getting the data itself (privateData->GetPrivateData(&data[0], &dataSize)) returns E_NOTIMPL.
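One thing worth double-checking: the IWMCodecPrivateData contract requires the partial output type to be handed to the codec (and accepted) before GetPrivateData will return anything, and a failed or skipped SetPartialOutputType is a plausible way to end up with E_NOTIMPL on the data call. A hedged sketch of the full sequence, assuming the encoder object exposes IWMCodecPrivateData and "partialType" was built from the output media type:

#include <dmo.h>
#include <wmcodecdsp.h>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Retrieve the codec private data that must be appended to the output type.
HRESULT GetCodecPrivateData(IUnknown* encoder, DMO_MEDIA_TYPE* partialType,
                            std::vector<BYTE>& data)
{
    ComPtr<IWMCodecPrivateData> priv;
    HRESULT hr = encoder->QueryInterface(IID_PPV_ARGS(&priv));
    if (FAILED(hr)) return hr;

    hr = priv->SetPartialOutputType(partialType);   // must succeed first
    if (FAILED(hr)) return hr;

    ULONG size = 0;
    hr = priv->GetPrivateData(nullptr, &size);      // query required size
    if (FAILED(hr) || size == 0) return hr;

    data.resize(size);
    return priv->GetPrivateData(data.data(), &size);
}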